# Ellagic Acid Prevents Particulate Matter-Induced Pulmonary Inflammation and Hyperactivity in Mice: A Pilot Study
## Abstract
The inhalation of fine particulate matter (PM) is a significant health-related environmental issue. Previously, we demonstrated that repeated PM exposure causes hyperlocomotive activity in mice, as well as inflammatory and hypoxic responses in their lungs. In this study, we evaluated the potential efficacy of ellagic acid (EA), a natural polyphenolic compound, against PM-induced pulmonary and behavioral abnormalities in mice. Four treatment groups were assigned in this study (n = 8): control (CON), particulate-matter-instilled (PMI), low-dose EA with PMI (EL + PMI), and high-dose EA with PMI (EH + PMI). EA (20 and 100 mg/kg body weight for the low and high dose, respectively) was orally administered for 14 days in C57BL/6 mice, and after the eighth day, PM (5 mg/kg) was intratracheally instilled for 7 consecutive days. PM exposure induced inflammatory cell infiltration in the lungs regardless of EA pretreatment. Moreover, PM exposure induced inflammatory protein expression in the bronchoalveolar lavage fluid and the expression of inflammatory (tumor necrosis factor alpha (Tnfα), interleukin (Il)-1b, and Il-6) and hypoxic (vascular endothelial growth factor alpha (Vegfα), ankyrin repeat domain 37 (Ankrd37)) response genes. However, EA pretreatment markedly prevented the induction of inflammatory and hypoxic response genes in the lungs. Furthermore, PM exposure significantly triggered hyperactivity, increasing the total moving distance and moving speed in the open field test. On the contrary, EA pretreatment significantly prevented PM-induced hyperactivity. In conclusion, dietary intervention with EA may be a potential strategy to prevent PM-induced pulmonary pathology and hyperactivity.
## 1. Introduction
Air pollution continues to threaten public health in many cities and endangers the basic right to breathe. Diesel exhaust particles (DEPs) consist of a carbon core that adsorbs a mixture of sulfate, nitrate, metals, and organic chemicals, including polycyclic aromatic hydrocarbons (PAHs) and nitro-PAHs. DEPs are one of the major components of urban air pollution [1]. DEPs comprise mainly fine particulate matter (PM) with diameters less than 2.5 µm (PM 2.5), including nanoparticles, which can reach the lower lobe of the lung and even systemic circulation. Exposure to air pollutants, including DEPs, is inevitable, and there is cumulative evidence that indicates that continuous exposure to DEPs triggers detrimental effects on the pulmonary [2,3], renal [4,5], hepatic [6], cardiovascular [7,8], and nervous [9,10] systems. In particular, DEP inhalation significantly induces pulmonary inflammation, oxidative stress, and malfunction in mammals [11,12].
In addition, increased exposure to air pollutants triggers behavioral disorders in humans and experimental animals. The worsening degree of air pollution is closely intertwined with the early onset of attention deficit hyperactivity disorder in Taiwan [13,14]. In the US and Denmark, increased inhalation of air pollutants increases the incidence of psychiatric disorders, such as depression, bipolar disorder, and schizophrenia [15]. Although the exact developmental mechanisms of direct pathological causes of behavioral disorders are poorly understood, environmental challenges may be significant initiators of behavioral disorders [16,17]. In addition, in experimental mice, exposure of dams to air pollutants during pregnancy triggered hyperactivity in the pups [18,19]. Furthermore, we have previously demonstrated that PM instillation in relatively young adulthood (8~10 weeks) triggers hyperactivity in mice [20,21]. Interestingly, dietary intervention with phenolic components successfully prevented PM-induced hyperactivity in experimental mice [21].
If exposure to air pollution is inevitable, then dietary intervention with functional materials may be an excellent preventive means to attenuate and/or prevent air-pollutant-induced physiological disturbances [22,23,24,25]. Polyphenolic components are strong candidates for coping with exposure to air pollutants, given that polyphenols are abundant in plants and possess multiple biological functions, including anti-inflammatory [26,27,28], antiendoplasmic reticulum stress [29,30], and antioxidative effects [31,32,33]. Among polyphenols, ellagic acid (EA) may be a promising candidate to mitigate and/or prevent air-pollutant-induced pathophysiological responses in humans. EA is a conjugated form of two distinctive gallic acids, known as strong antioxidants, bridged by two lactone rings [34]. Plants (e.g., berries, grapes, and pomegranates) produce EA as a metabolite of tannin hydrolysis [34]. EA attenuates dyslipidemia [35], weight gain [36], insulin resistance [37], carcinogenesis [38,39], inflammatory responses [40,41], and oxidative stress [42,43]. Therefore, owing to its biological functionalities, EA may be a promising polyphenol candidate that can mitigate the effects of air pollutant inhalation.
EA is an excellent candidate for controlling pathophysiological phenomena during the inhalation of air pollutants. Inhalation of air pollutants directly induces pulmonary disturbances such as inflammation. Therefore, dietary supplements against exposure to air pollutants should be effective in mitigating pathological events in the lungs. According to previous reports, EA significantly ameliorated pulmonary damage triggered by various pulmonary toxicants, such as hydrochloric acid [40], carbon tetrachloride [44], elastase [45], and bleomycin with cyclophosphamide [46], as well as in ovalbumin-induced asthma [47], in multiple animal models. The protective role of EA against pulmonary toxicants mainly relies on its anti-inflammatory and/or antioxidant effects [40,44,45,46,47]. Pretreatment with EA significantly attenuated LPS-induced acute pulmonary pathology and significantly reduced inflammatory cell infiltration and cytokine production (TNFα, IL-1β, and IL-6) in experimental mice [48].
Based on a literature review, EA may have protective functions against the effects of exposure of mammals to air pollutants (i.e., PM). However, animal models of PM exposure by instillation have only been established recently; therefore, robust experimental data are not yet available. Moreover, the preventive role of EA against pulmonary PM exposure has not yet been fully elucidated. In this study, to understand the protective effects of EA against acute pulmonary PM exposure, EA was orally administered at 20 and 100 mg/kg for 7 days before initiation of PM instillation. After 1 week of EA administration, PM (5 mg/kg) was instilled for 7 consecutive days while maintaining the aforementioned EA administration. To determine the beneficial effects of EA on PM exposure, pulmonary immune cell infiltration, PM loading, cytokine secretion, and mRNA expression were analyzed. Moreover, behavioral alterations caused by PM exposure and EA pretreatment were examined using an open field test (OFT).
## 2.1. Animal Experiments
All experimental animal procedures were reviewed and approved by the Institutional Animal Care and Use Committee (protocol # 2002-0023) of the Korea Institute of Toxicology and accredited by the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC). Seven-week-old male C57BL/6NCrlOri mice (Orient Bio Inc., Seongnam, Republic of Korea) were acquired, acclimatized for 7 days, and maintained in a controlled room at a temperature of 22 °C and humidity of 50% with a 12 h light/dark cycle. The mice were allowed free access to a purified diet (PMI Nutrition International LLC, St. Louis, MO, USA) and filtered distilled water. After the acclimatization period, the experimental mice were weighed and randomly assigned to four groups (n = 8/group) as follows:

1. Control (CON): 5% dimethyl sulfoxide (DMSO; Sigma-Aldrich, St. Louis, MO, USA) was administered orally for 14 days, and after the eighth day of DMSO administration, distilled water was instilled for another 7 days.
2. PM-instilled (PMI): PM (5 mg/kg; standard reference material 2975; National Institute of Standards and Technology, Gaithersburg, MD, USA) was instilled for 7 days.
3. Low-dose EA with PMI (EL + PMI): EA (20 mg/kg; Sigma-Aldrich) was administered orally for 14 days, and after the eighth day of EA administration, PM (5 mg/kg) was instilled for another 7 days.
4. High-dose EA with PMI (EH + PMI): EA (100 mg/kg) was administered orally for 14 days, and after the eighth day of EA administration, PM (5 mg/kg) was instilled for another 7 days.
Fifteen days after the first EA administration, the mice were euthanized by isoflurane inhalation. After sacrifice, the final body, liver, and lung weights were measured. PM instillation was performed 1 h after EA treatment in the EL + PMI and EH + PMI groups.
## 2.2. Histological Analysis and Collection of Bronchoalveolar Lavage Fluid (BALF)
The left lung was fixed in 10% (v/v) neutral-buffered formalin (Sigma-Aldrich) and further processed for hematoxylin and eosin staining as previously described [20,21]. BALF was collected as previously described [20,21], and cells were counted using a cell counter (NC-250; ChemoMetec, Gydevang, Denmark). In addition, cell types in the BALF were distinguished after smearing with the cytospin slide (Thermo Fisher Scientific, Waltham, MA, USA) and staining with Diff-Quik solution (Dade Diagnostics, Aguada, Puerto Rico) as previously described [20,21].
## 2.3. Enzyme-Linked Immunosorbent Assay (ELISA)
Mouse TNFα (Invitrogen, Waltham, MA, USA), IL-6 (Invitrogen), and H2O2 (Biovision, Milpitas, CA, USA) levels in the BALF were analyzed using commercially available ELISA kits. Serum corticosterone levels were determined using an ELISA kit (Abcam, Cambridge, MA, USA). All ELISA procedures were performed in accordance with the manufacturer’s instructions.
## 2.4. Quantitative Real-Time PCR (qRT-PCR)
Total RNA extraction, cDNA synthesis, and relative qRT-PCR analyses were performed as previously described [21]. Primers used for qRT-PCR are indicated in Table 1 and previous literature [20,21].
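The relative quantification step is only referenced above; as a concrete illustration, the sketch below implements the widely used 2^−ΔΔCt method in R. The reference gene (Gapdh), the Ct values, and the function name are illustrative assumptions rather than details taken from the study.

```r
# Minimal sketch of relative quantification by the 2^-ddCt method,
# assuming Ct values are normalized to a housekeeping gene (Gapdh here,
# as an assumption) and calibrated against the control (CON) group.
relative_expression <- function(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl) {
  d_ct      <- ct_target - ct_ref             # normalize to reference gene
  d_ct_ctrl <- ct_target_ctrl - ct_ref_ctrl   # same for the control group
  dd_ct     <- d_ct - mean(d_ct_ctrl)         # calibrate to the control mean
  2^(-dd_ct)                                  # fold change vs. control
}

# Illustrative Ct values (not study data): Tnfa vs. Gapdh, PMI vs. CON
fold <- relative_expression(
  ct_target      = c(24.1, 23.8, 24.4),  # Tnfa, PMI group
  ct_ref         = c(18.0, 18.2, 17.9),  # Gapdh, PMI group
  ct_target_ctrl = c(26.9, 27.1, 26.8),  # Tnfa, CON group
  ct_ref_ctrl    = c(18.1, 18.0, 18.2)   # Gapdh, CON group
)
mean(fold)  # mean fold induction relative to CON
```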
## 2.5. Open Field Test
An open field test (OFT) was performed 1 h after the last PM instillation. Experimental mice were individually placed in the center of a plexiglass container (42 cm width × 42 cm depth × 42 cm height). The illumination of the plexiglass container was controlled by placing a 100 W lamp 2 m above the floor. The mice were acclimatized for 10 min in an OFT environment, and behavioral indices were recorded continuously for 10 min. Mice movements were recorded using an automated computer system (Ethovision, Noldus, The Netherlands). The distance, duration, and velocity of movements were calculated and expressed in inches, seconds, and inch/s, respectively.
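To make the derivation of these indices concrete, the sketch below shows one way to compute them in R from a series of tracked (x, y) positions. The 30 frames/s sampling rate, the coordinate layout, and the central-zone definition (the inner half of the 42 cm arena) are assumptions; Ethovision exports and zone definitions vary by configuration.

```r
# Hedged sketch: deriving open field indices from tracked (x, y) positions,
# sampled at `fps` frames per second in an `arena` x `arena` box.
oft_indices <- function(x, y, fps = 30, arena = 42) {
  step <- sqrt(diff(x)^2 + diff(y)^2)      # per-frame displacement
  lo <- arena / 4                          # assumed bounds of the
  hi <- 3 * arena / 4                      # central zone (inner half)
  in_center <- x > lo & x < hi & y > lo & y < hi
  list(
    total_distance = sum(step),                      # same unit as x, y
    mean_speed     = sum(step) / (length(x) / fps),  # unit/s
    max_speed      = max(step) * fps,                # fastest single frame
    center_time    = sum(in_center) / fps            # seconds in center
  )
}
```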
## 2.6. Statistical Analysis
All data from the experiments are summarized and expressed as the mean ± standard deviation for each group. Normality of the experimental data was assessed using the D’Agostino–Pearson omnibus test. If the datasets were normally distributed, one-way analysis of variance (ANOVA) with Tukey’s post hoc test was applied; otherwise, the Kruskal–Wallis test with Dunn’s post hoc test was used. Statistical significance was set at p < 0.05. Statistical analysis was performed using GraphPad PRISM 5 (GraphPad Software, San Diego, CA, USA).
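A minimal R re-implementation of this decision tree, for readers without GraphPad PRISM, might look as follows. The data layout (a data frame `df` with a numeric `value` and a factor `group`) is assumed, and `fBasics::dagoTest` and `FSA::dunnTest` are one choice of implementations of the D’Agostino–Pearson omnibus and Dunn’s tests, respectively.

```r
# Hedged sketch of the statistical workflow described above.
library(fBasics)  # dagoTest: D'Agostino-Pearson omnibus normality test
library(FSA)      # dunnTest: Dunn's post hoc test

# Test normality within each group; proceed parametrically only if all pass
normal <- all(tapply(df$value, df$group, function(v)
  fBasics::dagoTest(v)@test$p.value[1] > 0.05))  # [1] = omnibus p-value

if (normal) {
  fit <- aov(value ~ group, data = df)  # one-way ANOVA
  print(TukeyHSD(fit))                  # Tukey's post hoc comparisons
} else {
  print(kruskal.test(value ~ group, data = df))
  print(dunnTest(value ~ group, data = df))  # Dunn's post hoc comparisons
}
```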
## 3.1. Alterations in Body and Relative Organ Weights
Pulmonary PM exposure (PMI, EL + PMI, and EH + PMI) did not significantly alter the final body weight at sacrifice (Figure 1A). In contrast, delta body weight and relative lung weight were significantly increased by PM instillation (Figure 1B,D). Interestingly, EA treatment significantly attenuated the PM-induced increase in delta body weight regardless of the EA concentration (Figure 1B). However, the PM-induced increase in lung weight did not change with EA treatment (Figure 1D). There were no significant differences in the relative liver weight among the experimental groups (Figure 1C). EA treatment did not affect the final body weight or the relative lung and liver weights. The overall patterns of change in delta body weight and relative lung weight exhibited trends similar to those in our previous study on quercetin treatment [21]. In our previous findings, PM instillation increased delta body weight [20,21]; however, quercetin treatment inhibited the PM-induced increase in delta body weight [21], similar to the effect exerted by EA treatment in the current study.
## 3.2. Pulmonary PM Loading and Inflammatory Cytokine Secretion
According to our previous findings, PM instillation directly induces pulmonary inflammatory responses in rodents [20]. Similar to our previous findings, in this study, PM exposure resulted in black particles or black pigment-laden alveolar macrophages in the alveolar areas (Figure 2B, blue arrows). In addition, infiltrated inflammatory cells were noted in the peribronchiolar, perivascular, and interstitial regions (Figure 2B, red arrows), similar to our previous reports [20]. Pulmonary PM loading induces infiltration of immune cells and cytokine secretion in the BALF [20], and we postulated that EA treatment would attenuate the recruitment of immune cells and cytokine secretion in the BALF. As expected, PM exposure markedly increased the total number of immune cells and inflammatory cytokines in the BALF. After PM exposure, total immune cells in the BALF were increased ~3-fold compared with CON with induction of the absolute number of neutrophils and macrophages (Figure 3A–C). In addition, a significant increase in the number of eosinophils and lymphocytes was induced in the PMI group compared with the CON group (Figure 3D,E). However, the total number of immune cells was not ameliorated in the EL + PMI and EH + PMI groups compared with the PMI group (Figure 3A–C), similar to the results in our previous study on quercetin treatment [21]. In contrast, the number of eosinophils and lymphocytes gradually decreased in the EA-treated groups compared with that in the PMI group in an EA dose-dependent manner (Figure 3D,E).
PM exposure also upregulated pulmonary cytokine secretion in the BALF. After PM instillation, pulmonary TNFα and IL-6 protein secretion was remarkably elevated, consistent with our previous findings [20,21]. PM exposure elevated pulmonary TNFα and IL-6 secretion in the BALF by approximately 2.6- and 19-fold, respectively, compared with the CON group (Figure 4A,B). However, EA treatment did not prevent the induction of pulmonary inflammatory cytokine secretion in the BALF (Figure 4A,B). In our previous study, quercetin treatment also failed to prevent PM-induced recruitment of immune cells and cytokine secretion in the BALF [21]. Therefore, PM instillation may directly and strongly induce pulmonary inflammation, and EA and quercetin [21] may not fully prevent inflammatory events such as physical PM loading and inflammatory cytokine secretion in the BALF. Moreover, we measured hydrogen peroxide levels in the BALF to determine whether PM exposure may induce pulmonary oxidative stress. PM exposure for 1 week did not increase hydrogen peroxide secretion in the BALF (Figure 4C), consistent with our earlier findings [20,21].
## 3.3. EA Treatment Prevented PM-Induced Expression of Inflammatory and Hypoxic Response Genes
EA treatment did not reduce the recruitment of immune cells or cytokine secretion in the BALF. However, in our previous study, we observed that quercetin exerted anti-inflammatory effects by decreasing pulmonary cytokine mRNA expression [21]. Similarly, pulmonary cytokine mRNA expression was increased in the PMI group compared with that in the CON group. The mRNA expression of pulmonary cytokines such as Tnfα, Il-1b, and Il-6 increased 6.2-, 3.1-, and 1.4-fold, respectively, in the PMI group compared with the CON group (Figure 5A–C). However, EA treatment decreased PM-induced pulmonary Tnfα mRNA expression by 1.9-fold and 2.9-fold in the EL + PMI and EH + PMI groups, respectively, compared with the PMI group (Figure 5A). In addition, EA treatment remarkably reduced PM-induced pulmonary Il-1b mRNA expression by 1.6-fold and 2.1-fold in the EL + PMI and EH + PMI groups, respectively, compared with the PMI group (Figure 5B). Furthermore, EA treatment also reduced PM-induced pulmonary Il-6 mRNA expression by 0.8-fold and 0.9-fold in the EL + PMI and EH + PMI groups, respectively, compared with the PMI group (Figure 5C).
Inflammatory and hypoxic responses are often coincidental physiological events that occur in a site- and cell-type-specific manner [49]. To understand whether (1) PM exposure induced pulmonary hypoxic responses and (2) EA treatment prevented PM-induced hypoxic responses, mRNA expression of hypoxic response genes (e.g., Vegfα and Ankrd37) was assessed in the lung tissue by qRT-PCR. As expected, pulmonary Vegfα mRNA expression in the PMI group was elevated 4.2-fold compared with that in the CON group; however, pulmonary Vegfα mRNA expression was markedly attenuated by 1- and 1.7-fold in the EL + PMI and EH + PMI groups, respectively, compared with that in the PMI group (Figure 5D). In addition, pulmonary Ankrd37 mRNA expression in the PMI group increased by 2.1-fold compared with the CON group; however, pulmonary Ankrd37 mRNA expression was significantly attenuated by 1.2- and 1.4-fold in the EL + PMI and EH + PMI groups, respectively, compared with the PMI group (Figure 5E). Similar trends were observed in a previous experiment in which PM exposure elevated the mRNA expression of inflammatory and hypoxic response genes, which was significantly reduced by quercetin treatment [21].
## 3.4. EA Treatment Prevented PM-Induced Hyperactivity
To evaluate the behavioral effects of EA on PM-exposed mice, an OFT was implemented. In this study, mice in the PMI group exhibited hyperactivity compared with the CON group. The total moving distance, including both the outer and central parts of the plexiglass container, was increased in the PMI group compared with the CON group (Figure 6A–C). In our previous study, PM-treated mice spent a significantly increased amount of time in the central area, which quercetin treatment decreased [21]; in this experiment, however, no treatment significantly altered the time spent in the central part of the plexiglass container (Figure 6F). Although there were no statistical differences, PM exposure increased the time spent in the central part of the plexiglass container by approximately 30.4% (p = 0.19) compared with the CON group, while EL + PMI and EH + PMI decreased the central staying time by approximately 75.5% and 85.6%, respectively, compared with the PMI group. Likewise, in all groups, no significant changes were noted in the staying time at the border of the plexiglass container (Figure 6E). The total movement speed of the PMI group was significantly elevated, with hyperactivity (mean and maximum speed) at the border of the plexiglass container (Figure 6G,H,J). For the PMI group, the maximum speed in the central area did not significantly increase (Figure 6K); however, the mean speed in the central area significantly increased (Figure 6I). In contrast, in the EL + PMI and EH + PMI groups, the hyperactivity observed in the PMI group was reduced. Subsequently, we proposed that the amelioration of PM-induced hyperactivity with EA treatment may involve serum corticosterone, the gold standard for assessing stress levels. To answer this extended research question, serum corticosterone levels were analyzed using an ELISA kit. However, there were no distinguishable differences in serum corticosterone levels among the experimental groups (Figure 6L), as in our previous study [21].
## 4. Discussion
In this study, we investigated the potential protective effects of EA against PM-induced pulmonary pathology and locomotor hyperactivity in experimental rodents. PM exposure is an inevitable and chronic event; therefore, dietary intervention with supplementation of functional phenolic compounds may be an ideal means to prevent and/or attenuate PM-induced pulmonary pathology and behavioral alterations. To understand whether EA pretreatment effectively attenuated PM-induced pulmonary pathology and hyperactivity, we used our previous pilot experimental conditions [20,21]. Briefly, mice were supplemented with vehicle control or EA (20 or 100 mg/kg) for 7 days, and then PM was instilled with continuous dietary interventions for the following 7 days. Pulmonary PM loading was a physical and inevitable event because EA pretreatment failed to prevent pulmonary PM accumulation and recruitment of immune cells in the BALF. EA pretreatment partially prevented PM-induced pulmonary cytokine and hypoxic mRNA expression and hyperactivity.
PM instillation significantly elevated PM loading in the lung and pulmonary inflammatory responses, similar to our previous findings [20,21]. Based on the histological evaluation, black materials from the PM markedly accumulated in the alveolar lumen and interstitial tissue in all PMI groups, regardless of EA pretreatment. In the BALF, PM instillation significantly induced infiltration of immune cells such as neutrophils and macrophages, as noted in previous publications [20,21]. Moreover, cytokine secretions in the BALF, such as those of IL-6 and TNFα, were remarkably elevated in all PMI groups. EA treatment did not significantly prevent IL-6 and TNFα induction in the BALF. Jeong et al. also reported that dietary intervention with quercetin did not prevent PM loading in the lung and cytokine secretion in the BALF [21]. The PM dose in our experimental protocol was probably excessive, as evidenced by the pulmonary PM loading; therefore, dietary intervention may not have been sufficient to prevent pulmonary cytokine secretion in the BALF.
However, EA pretreatment significantly attenuated PM-induced pulmonary cytokine and hypoxic mRNA expression in our experiments. As expected, PM instillation significantly induced the mRNA expression of pulmonary cytokines (Il-1b, Tnfα, and Il-6), as increased cytokine secretion was observed in the BALF. Moreover, the expression of hypoxic response genes (e.g., Ankrd37 and Vegfα) was markedly elevated in the PMI group. The induction of inflammatory and hypoxic responses verified our previous results [21]. However, EA treatment significantly reduced PM-induced inflammatory and hypoxic changes in mRNA expression. Key regulatory proteins for inflammatory and hypoxic responses are NF-κB and HIF1α, respectively, which are closely intertwined at the molecular level [50]. The NFκB and HIF1α pathways share a common molecular denominator, the IKK complex; therefore, the induction of NFκB by phosphorylation may trigger hypoxic signal induction of HIF1α, and vice versa. Our previous [21] and current findings suggest that dietary intervention with phenolic compounds (e.g., quercetin and EA) may attenuate PM-induced pulmonary inflammatory and hypoxic mRNA expression. In future studies, the expression of NFκB and HIF1α pathways should be scrutinized to understand whether dietary intervention can prevent PM-induced pulmonary inflammation and/or hypoxic events.
EA pretreatment significantly attenuated PM-induced locomotor hyperactivity in experimental mice. The PMI group had increased total, border, and center moving distances and mean speeds and increased maximum speed at the border compared with the CON group. Interestingly, EA pretreatment decreased the distinctive PM-induced hyperactivity by attenuating moving distances in total (EH + PMI), border (all EA treatments), and center (EH + PMI), mean speeds in total (EH + PMI), border (EH + PMI), and center (all EA treatments), and maximum speed in the border (EH + PMI). Previous findings using cohort studies have also demonstrated that PM exposure in early developmental periods triggers attention deficit hyperactivity disorder-like hyperactivity [14,51]. In addition, high-DEP exposure prenatally and 1 week after birth led to increased hyperactivity in experimental mice [18]. Moreover, maternal PM exposure significantly triggered hyperactivity in pups in a mouse model [19]. In this study, we demonstrated that PM instillation in relatively young adulthood (8~10 weeks) also increased locomotor activity in mice, consistent with our previous findings [20,21]. Interestingly, dietary intervention with phenolic components, such as EA and quercetin [21], successfully prevented PM-induced hyperactivity in experimental mice. An increased chance of inhalation of air pollutants is closely intertwined with an elevation in abnormal behaviors, such as depression, bipolar disorder, and schizophrenia [15]. Therefore, finding and applying functional dietary resources (e.g., EA and quercetin) as preventive measures against air pollutants may be a possible and sustainable strategy to maintain normal health.
Our current findings have significant advantages and disadvantages when extrapolating to the clinical field. Our experimental conditions included limited dietary intervention, PM exposure time, and PM concentration. Human exposure to PM may be long-term; however, our experimental protocol was executed over a relatively short period (14 days of dietary intervention and 7 days of PM exposure) with relatively high concentrations of PM. Dietary intervention with phenolic compounds (EA and quercetin [21]) did not significantly prevent inflammatory cytokine secretion in the BALF. It seems that our experimental conditions may not fully recapitulate potential pathological events and dietary interventions in humans. In addition, we measured hydrogen peroxide to gauge the pulmonary oxidative stress level in the BALF because prolonged inflammation may induce oxidative stress. Under hypoxic conditions, oxidative stress is generally elevated through the induction of reactive oxygen species (ROS) [52]. Therefore, we postulated that hydrogen peroxide would be increased by PM exposure because of the induction of hypoxic Ankrd37 and Vegfα mRNA expression in the lungs. However, hypoxic mRNA expression in the lung and hydrogen peroxide secretion in the BALF did not match, because hydrogen peroxide concentrations in the BALF were similar among all experimental groups. In future studies, we need to optimize the experimental conditions to draw robust conclusions regarding whether PM exposure triggers pulmonary hypoxic responses. In addition, hyperactivity was noted in the PMI group, but EA pretreatment significantly normalized hyperactivity in mice. Our previous study used an identical experimental setting, in which quercetin also prevented PM-induced hyperactivity [21]. Therefore, we hypothesized that the stress hormone corticosterone would be altered by PM exposure; however, serum corticosterone levels were unchanged among all treatments, regardless of dietary intervention or PM treatment. Therefore, in the future, we may try to identify other behavior-related hormones that are controlled by PM exposure and dietary intervention.
Although there are restrictions, there are numerous advantages to our experimental setting. In our current and previous experiments [21], we observed, within a relatively short period of time, the marked preventive potency of EA and quercetin [21] against pulmonary inflammatory and hypoxic mRNA expression induced by PM exposure. Therefore, in the future, applying optimized, lower PM concentrations that reflect current air pollution over longer experimental periods may reveal suppression of PM-induced infiltration of inflammatory cells and cytokine secretion in the BALF. Another promising finding was the behavioral alterations observed in our mouse model. Similar to other PM exposure models in the early life phases [18,19,53], we also found that PM exposure in early adulthood induced hyperactivity in mice. A relatively short period of dietary intervention with EA and quercetin [21] effectively normalized hyperlocomotive activity. Therefore, dietary intervention may be an acceptable approach for maintaining normal behavior amidst PM exposure.
EA is a widely accepted dietary polyphenol with multiple beneficial effects, especially in reducing biological inflammatory reactions [41,54,55,56]. In our experiments, EA pretreatment prevented PM-induced pulmonary cytokine mRNA expression over a relatively short period (14 days). Other studies have demonstrated that EA has significant efficacy in attenuating pulmonary inflammation, oxidative stress, and fibrosis (Table 2). In an acute lung injury (ALI) mouse model triggered by hydrochloric acid, oral EA treatment reduced neutrophil recruitment in the BALF and the lungs [40]. In this model, EA decreased the proinflammatory cytokine IL-6 and increased the anti-inflammatory cytokine IL-10 in the BALF [40]. In addition, EA treatment exerted an anti-inflammatory effect in an LPS-induced ALI model [48]. EA treatment also attenuated elastase-induced immune cells and cytokine secretion in the BALF in an emphysema model [45]. In a murine asthma model, EA treatment also prevented pulmonary inflammation by suppressing pulmonary NFκB activation [47]. Furthermore, EA has anti-inflammatory, antioxidative [44,46], and antifibrosis effects [46] in experimental rodents.
In this study, EA pretreatment significantly prevented PM-induced pulmonary inflammatory and hypoxic mRNA expression, along with the normalization of hyperlocomotive activity. However, inflammatory cytokine and hydrogen peroxide secretion in the BALF did not alter with either PM exposure or EA pretreatment. Our study is a novel endeavor in at least two aspects: [1] investigating the pulmonary pathophysiology of PM instillation and [2] investigating whether dietary intervention with EA could thwart PM-induced pathology. To date, dietary preventive means in PM-exposed animal experiments have just begun [21]; therefore, there is limited information on which experimental settings are suitable for potential clinical application. Our experimental period may have been relatively short, considering PM exposure in humans has a longer incidence. We also used a relatively higher PM concentration compared with those that humans are practically exposed to. Therefore, in future studies, we may optimize our experimental protocols by increasing the PM exposure duration and using lower PM concentrations. Although our experimental setting has some limitations, prevention of pulmonary inflammatory and hypoxic mRNA expression by EA pretreatment may also prevent PM-induced protein expression and function. Another obvious finding was that dietary intervention with EA pretreatment normalized PM-induced hyperactivity.
## 5. Conclusions
This study investigated the effectiveness of EA, a natural polyphenolic compound, in preventing the adverse effects of PM exposure in C57BL/6 mice. Four groups of mice were assigned (CON, PMI, EL + PMI, and EH + PMI); EA was orally administered for 14 days, and after the eighth day, PM (5 mg/kg) was intratracheally instilled for 7 consecutive days. The experimental results demonstrated that pretreatment with EA prevented PM-induced pulmonary inflammatory and hypoxic mRNA induction, as well as hyperactivity, in the experimental mice. This study suggests that EA may be a promising approach for mitigating the pathophysiological impacts of PM exposure.
# Meta-Analysis of Exploring the Effect of Curcumin Supplementation with or without Other Advice on Biochemical and Anthropometric Parameters in Patients with Metabolic-Associated Fatty Liver Disease (MAFLD)
## Abstract
Metabolic (dysfunction)-associated fatty liver disease (MAFLD), previously known as non-alcoholic fatty liver disease (NAFLD), is the most common chronic liver disease. MAFLD is characterized by the excessive presence of lipids in liver cells and metabolic diseases/dysfunctions, e.g., obesity, diabetes, pre-diabetes, or hypertension. Due to the current lack of effective drug therapy, the potential for non-pharmacological treatments such as diet, supplementation, physical activity, or lifestyle changes is being explored. For the mentioned reason, we reviewed databases to identify studies that used curcumin supplementation or curcumin supplementation together with the use of the aforementioned non-pharmacological therapies. Fourteen papers were included in this meta-analysis. The results indicate that the use of curcumin supplementation or curcumin supplementation together with changes in diet, lifestyle, and/or physical activity led to statistically significant positive changes in alanine aminotransferase (ALT), aspartate aminotransferase (AST), fasting blood insulin (FBI), homeostasis model assessment of insulin resistance (HOMA-IR), total triglycerides (TG), total cholesterol (TC), and waist circumference (WC). It appears that these therapeutic approaches may be effective in alleviating MAFLD, but more thorough, better designed studies are needed to confirm this.
## 1. Introduction
Metabolic-associated fatty liver disease (MAFLD), formerly known as non-alcoholic fatty liver disease (NAFLD), is the most common chronic liver disease worldwide [1,2]. It is becoming a huge public health problem as its prevalence is on the rise, generating large costs (the estimated annual cost in Europe is EUR 35 billion, while it is EUR 89 billion in the US) [3,4]. MAFLD is characterized by excessive (>5% of liver weight) fat accumulation in hepatocytes that is not caused by a viral infection, alcohol consumption, or medication [5]. In addition, the condition coexists with other diseases or metabolic disorders such as being overweight or obese, type 2 diabetes or pre-diabetes, insulin resistance, dyslipidemia, or hypertension [6,7,8,9] and is also a factor that increases the risk of liver- and cardiovascular-disease-related mortality [10,11]. Another serious risk is the possibility of disease progression to non-alcoholic steatohepatitis (NASH), which may occur in 23–44% of MAFLD patients, resulting in fibrosis and even cirrhosis, which within 5–7 years leads to liver failure in 40–60% of cases and within 3–7 years to hepatocellular carcinoma (HCC) in 2.4–12% of patients [12]. In view of the potential progression of MAFLD and the high costs, early diagnosis, prevention, treatment of risk factors, and lifestyle modification are important (Figure 1) [3]. In 2016, the European Association for the Study of the Liver specifically recommended dietary changes and a gradual increase in aerobic exercise or resistance training as interventions leading to lifestyle changes in patients with MAFLD. The recommendations for diet and physical activity are due, among other reasons, to the fact that no effective pharmacological therapy is currently available [13].
The aim of this study is to review the effects of curcumin supplementation alone or combined with dietary, physical activity, and/or lifestyle changes on biochemical and anthropometric parameters in the course of MAFLD. This aim is based on the existing knowledge of the pathomechanism of MAFLD and the known therapeutic properties of curcumin, both of which are described in the following sections.
## 2. Pathophysiology
It is now recognized that factors leading to MAFLD include a poor diet, a sedentary lifestyle, and genetic and environmental factors. Therefore, the mechanisms that lead to the development of MAFLD are complex, leading to it being referred to as the “multiple hits hypothesis”. Although current knowledge points to specific factors leading to MAFLD, the exact pathomechanism is not yet fully understood. However, its basis is considered to be insulin resistance (IR), which results in increased de novo hepatic lipogenesis (DNL) and reduced inhibition of adipose tissue lipolysis, leading to the increased influx of fatty acids (FA) into hepatocytes and their storage as triglycerides. In addition, IR also causes dysfunction of adipose tissue resulting in altered production and secretion of adipokines and proinflammatory cytokines. High levels of free fatty acids, free cholesterol, and other lipid metabolites result in lipotoxicity, which is also an important part of the pathomechanism of MAFLD. This leads to increased levels of reactive oxygen species, which cause dysfunction of the endoplasmic reticulum and mitochondria. Changes in the intestinal microbiota may also be involved in the increased levels of free fatty acids, leading to increased permeability of the small intestine, which results in enhanced absorption of FA. This results in the activation of pro-inflammatory pathways and the release of pro-inflammatory cytokines such as IL-6 and TNF-α [6].
## 3. Curcumin
Curcumin is a polyphenol belonging to the group of curcuminoids. It occurs in the rhizomes of the plant called turmeric (Curcuma longa), which belongs to the ginger family. Turmeric is naturally found in Asia, mainly in India. It is mainly known for its culinary applications due to its taste, aroma, and intense yellow color. However, turmeric has been used in medicine for thousands of years due to its curcumin content [14]. Curcumin is characterized by many desirable properties. It has anti-inflammatory, antioxidant, and anticancer properties, among others [15]. Furthermore, importantly, it is safe and rarely causes adverse symptoms. For this reason, it is used to treat or support the treatment of many diseases, e.g., cardiovascular diseases, inflammatory bowel diseases, breast, stomach, pancreatic and lung tumors, dermatoses, allergic asthma, and liver diseases [16,17,18,19,20,21].
## 4. Material and Methods
The protocol for this systematic review and meta-analysis was based on the preferred reporting items of systematic reviews and meta-analysis (PRISMA) statement [22]. The design of the present work was fully specified in advance. It was registered in the PROSPERO (International Prospective Register of Systematic Reviews, CRD42022310950, https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=310950, accessed on 24 February 2023).
## 4.1. Types of Participants
Participants meeting the inclusion criterion of being adults (>18 years old) suffering from MAFLD were eligible for the study group. No additional criteria, such as gender or nationality, were defined. Participants were excluded from the study when they did not manifest MAFLD.
## 4.2. Types of Interventions
Interventions using curcumin supplementation only or curcumin supplementation with other changes (e.g., diet and/or physical activity and/or other lifestyle modifications) and presenting the results of biochemical and/or anthropometric parameters’ outcomes before and after supplementation were included. Studies involving animals were excluded.
## 4.3. Types of Comparisons
There were no specific comparison criteria.
## 4.4. Types of Outcomes
Included studies had to present the outcome of at least one biochemical parameter or anthropometric measurement, measured at baseline (pre-intervention) and post-intervention. The biochemical and anthropometric parameters measured in each study are presented in Table 1 and Table 2.
## 4.5. Types of Studies
Only randomized controlled trials published in peer-reviewed journals in English were included. The precise duration of the undertaken intervention was not specified. The exclusion criterion was a non-human study. The detailed PICOS criteria are described in Table 3.
## 4.6. Search Strategy and Study Selection
We reviewed available publications in databases including PubMed, Web of Science, and Scopus using the words “NAFLD” or “MAFLD” or “metabolic-associated fatty liver disease” or “non-alcoholic fatty liver disease” and “curcumin” or “turmeric”. We limited the results to papers in English published by March 2022 (Figure 2).
## 4.7. Quality and Risk of Bias Assessment
An assessment of the quality of the studies meeting the inclusion criteria was performed using the Cochrane risk of bias tools. The following elements of the studies were analyzed: selection bias (random sequence generation and allocation concealment), performance bias (blinding of participants and personnel), detection bias (blinding of outcome assessment), attrition bias (incomplete outcome data), reporting bias (selective reporting), and other bias (Table 4) [37]. Funnel plots were used to provide a visual assessment of the association between treatment estimate and study size. Publication bias was considered significant when the p-value was less than 0.05 in Begg’s test [38] (Supplementary Figures S1 and S2).
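As a hedged sketch of these checks, assuming a fitted meta-analysis object `m` from the R meta package (see Section 4.8), the calls might look as follows; argument names follow recent versions of meta.

```r
# Publication-bias assessment for a meta-analysis object `m` (see
# Section 4.8 for how `m` is fitted). Both calls are from the meta package.
library(meta)

funnel(m)  # funnel plot: treatment estimate vs. precision/study size

# Begg's rank correlation test; k.min lowers the default minimum number of
# studies, since some models here pool fewer than ten trials (assumption).
metabias(m, method.bias = "Begg", k.min = 5)
```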
## 4.8. Statistical Analysis
The meta packages of R were used to perform the analyses [38,39,40,41]. A random effects model was used to estimate the pooled effect for values of I2 ≥ 50%, while a fixed effects model was used for values of I2 < 50%. The effect size was calculated as the mean difference (MD) in changes from baseline along with 95% confidence intervals (CI). A p-value < 0.05 was defined as statistically significant. The results of the conducted analyses are presented as forest plots. The heterogeneity among the included studies was evaluated using the I2 statistic: an I2 value of >50% corresponds to high heterogeneity, values between 25–50% define heterogeneity as moderate, and I2 < 25% indicates low heterogeneity.
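A minimal sketch of this pooling strategy with the R meta package is shown below; the data frame `alt_data` and its column names are illustrative assumptions, and the `common`/`random` arguments follow meta version 5 or later (older releases use `comb.fixed`/`comb.random`).

```r
# Hedged sketch: pooling per-study mean changes from baseline as mean
# differences (MD), then choosing the model by the I2 threshold above.
library(meta)

m <- metacont(n.e = n_trt, mean.e = mean_trt, sd.e = sd_trt,   # curcumin arm
              n.c = n_ctl, mean.c = mean_ctl, sd.c = sd_ctl,   # control arm
              studlab = study, data = alt_data,
              sm = "MD")                # effect size: mean difference

use_random <- m$I2 >= 0.50              # I2 is stored as a proportion
m <- update(m, common = !use_random, random = use_random)

summary(m)   # pooled MD with 95% CI and heterogeneity statistics
forest(m)    # forest plot of per-study and pooled estimates
```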
Cytoscape (version: 3.8.1) was used to create network graphs presenting the studies’ results [42].
**Table 4**
| Study | Random Sequence Generation | Allocation Concealment | Blinding of Participant and Personnel | Blinding of Outcome Assessment | Incomplete Outcome Data | Selective Reporting | Other Bias |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Rahmani, 2016 [23] | + | + | + | + | + | + | + |
| Kelardeh, 2017 [24] | +/− | + | + | + | + | + | +/− |
| Navekar, 2017 [25] | + | + | + | + | + | + | + |
| Chashmniam, 2019 [26] | + | + | + | + | + | + | + |
| Mirhafez, 2019 [27] | +/− | + | + | + | + | + | + |
| Hariri, 2020 [28] | + | + | + | + | + | + | + |
| Kelardeh, 2020 [29] | +/− | + | + | + | + | + | +/− |
| Saberi-Karimian, 2020 [30] | + | + | + | + | + | + | + |
| Panahi, 2016 [31] | +/− | + | − | − | + | + | + |
| Panahi, 2017 [32] | + | + | + | + | + | + | + |
| Jazayeri-Tehrani, 2019 [33] | +/− | + | + | + | + | + | + |
| Saadati, 2019 (a) [34] | + | + | + | + | + | + | + |
| Saadati, 2019 (b) [35] | + | + | + | + | + | + | + |
| Cicero, 2020 [36] | + | + | + | + | + | + | + |
## 5.1. Study Selection
In total, 221 studies were screened against the inclusion criteria. Ultimately, 14 randomized controlled trials were included in this meta-analysis. Table 1 and Table 2 show the characteristics of the included trials, divided by type of applied intervention.
## 5.2. Participant and Study Characteristics
Eight hundred and seventy-four NAFLD patients (429 in the treatment group and 418 in the control group) were included in the meta-analysis. Table 1 shows the characteristics of the studies in which only curcumin was supplemented. Table 2 presents the characteristics of studies in which, in addition to curcumin supplementation, physical activity and/or dietary advice and/or lifestyle changes were also used.
## 5.3. Interventions
In 8 of the study groups included in the analysis, only curcumin supplementation was used; in the other 8 study groups, curcumin supplementation was combined with physical activity and/or dietary advice and/or lifestyle changes. Two of the fourteen studies used both of the abovementioned types of intervention; therefore, the total number of study groups amounted to 16. The duration of the intervention was 8 or 12 weeks. The doses of curcumin used ranged from 80 to 1500 mg/day. Detailed characteristics of the studies are presented in Table 1 and Table 2.
## 5.4. Effect of Curcumin Supplementation and Curcumin Supplementation with Physical Activity and/or Dietary Advice and/or Lifestyle Changes on the Levels of Biochemical Parameters
ALT and AST levels were controlled in nine studies [23,24,26,27,28,32,33,34,35]. Fasting blood insulin levels (FBI) and HOMA-IR were controlled in five studies [25,31,33,35,36]. TG, TC, and LDL-C levels were controlled in eight studies [23,26,27,30,31,33,35,36]. Waist circumference (WC) was controlled in seven studies [24,28,30,32,33,35,36].
Regarding ALT and AST, decreases in their levels were observed (Table 5, Figure 3 and Figure 4). The heterogeneity of the effect measures regarding ALT (I2 = 6.0%, p = 0.39) and AST (I2 = 17.5%, p = 0.28) was low.
Decreases were also observed for parameters related to glucose metabolism (FBI and HOMA-IR) (Table 5, Figure 5 and Figure 6). The heterogeneity for FBI (I2 = 9.3%, p = 0.35) and HOMA-IR (I2 = 0%, p = 0.66) was low.
Decreases in levels were also observed among parameters related to lipid metabolism (TG, TC, and LDL-C) (Table 5, Figure 7, Figure 8 and Figure 9). The heterogeneity for TG (I2 = 0%, p = 0.72) was low, but for TC (I2 = 64.6%, p < 0.01) and LDL-C (I2 = 70.8%, p < 0.01), it was high.
Among the anthropometric parameters, a reduction in WC was observed (Table 5, Figure 10). The heterogeneity for WC (I2 = 0%, p = 0.93) was low.
The network presented in Figure 11 summarizes the results from single random effects models. The size of the circular nodes is proportional to the overall sample size of the intervention groups from the studies included in the model assessing the effects of the intervention on the parameter. The color of the node denotes the parameter group (white for anthropometric parameters, orange for liver-function-related parameters, ruby for lipid-metabolism-related parameters, red for blood pressure indicators, and turquoise for parameters related to glucose and insulin metabolism). The width of the arrows is proportional to the number of studies included in the model assessing the effects of the intervention on the particular parameter (k), which is also denoted as a label. The color of the arrows indicates the results of the random effects models: light green arrows denote statistically significant beneficial effects of supplementation with curcumin with or without other advice on the parameters, while light grey arrows denote statistically non-significant effects.
## 6. Discussion
Our meta-analysis summarizes the findings of fourteen RCTs that used curcumin supplementation or curcumin supplementation with physical activity and/or dietary advice and/or lifestyle changes. The studies conducted to date using curcumin indicate its many positive effects in the course of numerous diseases [14,44]. Physical activity, part of a broader lifestyle, also has many health benefits [45]. Recommendations from the World Health Organization (WHO) indicate that adults, in order to maintain optimal health, should perform at least 150–300 min of moderately intense exercise or 75–150 min of vigorous exercise weekly [46]. Diet is also an important component affecting health, and in the context of MAFLD, poor diet is one of the key elements leading to the development of the disease [6,47]. Due to the lack of an effective drug therapy for MAFLD, attempts are being made to use supplementation, diet, physical activity, and lifestyle changes as treatment, but it is not clear which combinations of the aforementioned elements of therapy would give the greatest effectiveness.
Our results indicate that in many studies, in addition to curcumin supplementation, patients with MAFLD were also advised on changing their diet and lifestyle or implementing physical activity. This is a very important fact in the context of interpreting the results of the studies, as each of these elements may additionally influence the change in the parameters studied making it impossible to unequivocally assess the efficacy of curcumin supplementation. Therefore, our meta-analysis highlighted the fact that, in the indicated studies, other types of interventions were used in addition to curcumin supplementation.
Our results suggest that curcumin supplementation, or curcumin supplementation combined with a change in dietary habits and/or implementation of physical activity and/or lifestyle changes, causes decreases in ALT, AST, FBI, LDL-C, TC, TG, HOMA-IR, and WC.
The results of our study are in keeping with previous meta-analyses in several cases, but there are also a few differences. In the case of ALT, our results are consistent with the studies of Ngu et al. [48], Goodarzi et al. [49], Yang et al. [50], and Jalali et al. [51], who also reported statistically significant decreases. In contrast, Wei et al. [52] obtained a decrease that was not statistically significant, but it should be noted that only two studies were included in that analysis. In the case of AST, our results are in accordance with all of the aforementioned publications [48,49,50,51,52], in which the authors also reported statistically significant decreases. FBI has so far been analyzed in two meta-analyses. Jalali et al. [51] reported a statistically significant decrease, which is consistent with our results, while Wei et al. [52] reported a decrease that was not statistically significant. For HOMA-IR, we reported a statistically significant decrease, similar to Yang et al. [50], Jalali et al. [51], and Wei et al. [52] in their meta-analyses. Among the lipid metabolism parameters (TG, TC, and LDL-C), decreases have been reported in previous meta-analyses, but they have not always been statistically significant. For TG, Yang et al. [50], Jalali et al. [51], and Wei et al. [52] also obtained statistically significant decreases in their analyses, while Ngu et al. [48] reported a statistically insignificant decrease. Regarding TC, Ngu et al. [48], Yang et al. [50], and Jalali et al. [51] obtained statistically significant decreases, as reported in our study. In contrast, Wei et al. [52] reported a statistically insignificant decrease in TC. For LDL-C, we obtained a statistically significant decrease, as did Jalali et al. [51] and Wei et al. [52], while Ngu et al. [48] and Yang et al. [50] reported statistically insignificant decreases. Waist circumference has only previously been analyzed in the study of Baziar et al. [53], who obtained a statistically significant decrease.
Our study has several strengths. First, it provides evidence of the positive effects of curcumin supplementation and curcumin supplementation with added physical activity and/or dietary recommendations and/or lifestyle changes on the levels of certain blood biochemical parameters and waist circumference.
Our study includes more RCTs than most previously published meta-analyses and also highlights the fact that some studies used other interventions (dietary recommendations, physical activity, and lifestyle changes) in addition to curcumin supplementation, which, to the best of our knowledge, has been omitted in previous publications.
Alongside these strengths, there are some limitations. First, not all of the studies controlled for the same biochemical parameters. Second, there were differences in the duration of the interventions. Third, the doses and forms of curcumin used varied between studies. Fourth, the additional recommendations accompanying curcumin supplementation were not always described in detail, with the information often limited to their general type. Fifth, the study groups, especially in some studies, were small.
## 7. Conclusions
This meta-analysis, based on RCTs, provides evidence that curcumin supplementation only or curcumin supplementation with physical activity and/or dietary advice and/or lifestyle changes leads to decreases in blood levels of ALT, AST, FBI, LDL-C, TC, and TG, as well as decreases in HOMA-IR and WC. However, these effects were obtained with curcumin doses ranging from 80 to 1500 mg/day and with additional recommendations that were not always described in detail. The studies conducted to date do not clearly identify the appropriate dose of curcumin, either used alone or in combination with additional physical activity and/or diet and/or lifestyle recommendations. It is also not possible to determine, on the basis of current studies, the effect of the mentioned additional recommendations on the effect induced by curcumin, and therefore also on its dose.
Therefore, further well-designed studies among MAFLD patients, using curcumin alone and with additional recommendations, are needed. The effect of physical activity, diet, and lifestyle on the effect induced by curcumin supplementation is also worthy of analysis. An element to be taken into consideration in future studies is the dose of curcumin. Other factors that may influence the results of a study, such as the diet, physical activity, lifestyle, or education of the participating patients, should be considered and described in detail. It is important that the aforementioned interventions are detailed and communicated to the participants to ensure that the patients follow the recommendations with full understanding and according to the established rules, as this may affect the final results. Recommendations cannot be based on general indications, such as “increase physical activity” or “follow a healthy diet”, as this is a strong limitation that does not allow for an accurate assessment of the impact of the applied interventions. Each recommendation should be precisely defined, preferably (where possible) in a measurable way, such as ‘30 min a day of walking 5 times a week’ or ‘consume 200 g of salmon per week’. Having subjects follow specific guidelines prevents within-group variation in the interventions due to misunderstanding or the patients’ own interpretation of the recommendations. Measurable recommendations also make it possible to assess the extent to which patients have followed them.
In conclusion, despite the limitations of the studies carried out to date, it seems that curcumin supplementation alone or with the addition of physical activity and/or dietary advice and/or lifestyle changes can be helpful in the treatment of patients with MAFLD.
# Sibling Resemblance in Physical Activity Levels: The Peruvian Sibling Study on Growth and Health
## Abstract
Physical activity is associated with a host of positive health outcomes and is shaped by both genetic and environmental factors. We aim to: (1) estimate sibling resemblance in two physical activity phenotypes [total number of steps∙day−1 and minutes of moderate steps per day (min∙day−1)]; and (2) investigate the joint associations of individual characteristics and shared natural environment with intra-pair sibling similarities in each phenotype. We sampled 247 biological siblings from 110 nuclear families, aged 6–17 years, from three Peruvian regions. Physical activity was measured using pedometers, and body mass index was calculated. In general, non-significant variations in the intraclass correlation coefficients were found after adjustment for individual characteristics and geographical area for both phenotypes. Further, no significant differences were found between the three sib-ship types. Sister-sister pairs tended to take fewer steps than brother-brother pairs (β = −2908.75 ± 954.31). Older siblings tended to walk fewer steps (β = −81.26 ± 19.83), whereas body mass index was not associated with physical activity. Siblings living at high altitude and in the Amazon region took more steps per day (β = 2508.92 ± 737.94 and β = 2213.11 ± 776.63, respectively) than their peers living at sea level. In general, we found no influence of sib-type, body mass index, or environment on the two physical activity phenotypes.
## 1. Introduction
Physical activity (PA) has been associated with a variety of positive health outcomes, generating transitional benefits from childhood through adolescence into adulthood [1,2], including decreases in obesity [3], cardiovascular disease, and diabetes [4], and increases in cognitive function as well as academic achievement [5]. Despite its recognized benefits, updated information on the prevalence and trends in PA [6,7] showed that the majority of children and adolescents worldwide are physically inactive, putting their current and future health at risk, and Peru is no exception [8]. For example, a recent global estimate from 146 countries showed that 81% of children and adolescents aged 11–17 years were physically inactive [9], with the prevalence of insufficient physical activity in Peruvian youth increasing from 82.6% in 2001 to 84.7% in 2016. To reverse current trends, it is important to investigate what types of factors can effectively influence daily active play and PA behaviors in childhood and adolescence.
There is considerable variability among children in their level and patterns of PA, and this variability is shaped by a host of genetic [10,11,12] and environmental [13,14] factors. Behaviors such as PA are often influenced by household and family characteristics, as families often share common interests and experiences [15,16,17]. However, different family members also express a degree of autonomy when it comes to lifestyle behaviors, and sometimes variation among related subjects is also remarkable [18].
The study of siblings offers unique insights into their biology and behavior, given their relationships to each other, as well as to other family members. For example, siblings share a substantial fraction of their genes that are transmitted from their parents; in addition, siblings often grow and mature in similar environments including the household, school, and neighborhood contexts. In addition, they also differ in their chronological age, maturity status, sex, body composition, physical fitness, or lifestyle choices [19,20,21].
Genome-wide association studies (GWAS) have provided evidence that variation in PA is associated with polymorphisms in several genes [22,23,24]. However, several reviews of the extant literature do not identify specific genetic factors exclusively responsible for physical activity phenotypes [25]. On the other hand, variation in sibling resemblance depending on the sib-type and the phenotype has also been considered. For example, Pereira et al. [26], using questionnaire data in Portuguese sibling pairs, showed that after adjustments for several covariates (biological, behavioral, familial, and environmental characteristics), sister-sister pairs demonstrated greater resemblance in their PA (ρ = 0.53) than brother-sister (ρ = 0.26) or brother-brother pairs (ρ = 0.18). In contrast, Jacobi et al. [27] found no differences in correlations between siblings (all ρ = 0.28) when using PA data collected with pedometers.
Since PA is a complex and multifaceted trait, it has also been documented that a significant fraction of its variation can be explained by different environmental exposures throughout the lifespan [28], and this is particularly evident in developing countries like Peru. The distinct living settings of Peruvians have been recognized as a kind of “natural laboratory”, a singular territory that offers an opportunity to assess the impact of geographical variation on PA levels by combining settings on the spectrum of both rural-urban developments as well as lowland-highland scenarios. Peruvians are exposed daily to different natural stressors (e.g., altitude, temperature, pollutants), as well as social and economic inequalities (e.g., access to health care, quality of nutrition, access to public recreational infrastructure) in a unique geographical diversity [29], which can influence intrapair similarities in PA levels. To date, there is only one study focusing on physical fitness phenotypes in Peru, which concluded that both individual characteristics and geographical area of residence were significantly related to the magnitude of sibling resemblance as well as the mean levels of physical fitness [21].
Despite this recognition, to date there is no available evidence regarding variation in PA levels among Peruvian siblings, especially embracing the diversity of the three distinct geographical areas. Hence, using sibling data, as well as a multilevel statistical approach [30], we explored resemblance in PA levels among Peruvian siblings conditioned on the additive effects of their individual characteristics and shared natural environment. Specifically, we intend to: (1) estimate sibling resemblance in two PA phenotypes [total number of steps·day⁻¹ and minutes of moderate steps (min·day⁻¹)]; and (2) investigate the joint associations of individual characteristics (age and body mass index) as well as shared natural environment with intra-pair sibling similarities in each phenotype.
## 2.1. Design and Participants
Our sample originates from The Peruvian Sibling Study on Growth and Health [31]. This study probes into sibling resemblance in body composition, physical fitness, physical activity, different facets of motor development, as well as gross motor coordination. A total of 247 biological siblings (147 females and 100 males) from 110 nuclear families (67.2% with two siblings; 32.8% with three siblings) were selected. All are native to three Peruvian geographical areas located at different altitudes: sea level (Barranco = 58 m), Amazon region (La Merced and San Ramon = 751 m), and high altitude (Junín = 4107 m). Only families that had two or three children, aged between 6 and 17 years, with complete PA data were considered in the present paper. Parents or legal guardians provided written informed consent. The project was approved by the Ethics Committee of the School of Physical Education and Sports, National University of Education Enrique Guzmán y Valle, Peru (UNE EGyV). Following this approval, all known siblings were invited to participate in the study.
## 2.2.1. Anthropometry
Body measurements were made according to standardized protocols [32]. Height was measured to the nearest 0.1 cm with a portable stadiometer (Sanny, Model ES-2060), holding the child's head in the Frankfurt plane; weight was measured with a digital scale (Pesacon, Model IP68) with a precision of 0.1 kg. Body mass index (BMI) was calculated using the standard formula: BMI = weight (kg)/height (m)².
## 2.2.2. Physical Activity
In order to objectively measure PA, we used pedometers, body movement sensors that validly and reliably assess PA among children and youth [33,34]. Pedometers have been used in different populations from different countries [35], and their validity has been studied [36]. Subjects used the Omron Walking Style II pedometer (Omron Healthcare, Inc., Muko, Japan) over five consecutive days (three weekdays and two weekend days). These pedometers have a multiday memory function that automatically stores the total number of steps·day⁻¹ (a proxy measure of the total volume of PA) and the walking time, in minutes (min·day⁻¹), at a moderate or brisk pace in a day (a proxy measure of moderate-to-vigorous PA, counting the time spent walking at 3.0 METs or more) [37]. Siblings were instructed in the use of the pedometer and told to remove it only for bathing and before sleeping at night. The devices were attached to the trouser belt (strap) using a clip, leaving the unit perpendicular to the ground. For the present study, only data from sib-ships with complete information from five consecutive days (Wednesday to Sunday), with an average of 12 h·day⁻¹ of pedometer use, were considered.
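As a concrete illustration of this screening rule, the sketch below (Python/pandas, with hypothetical column names and toy values, not the study's actual data) keeps only children with all five days recorded and an average wear time of at least 12 h·day⁻¹:

```python
import pandas as pd

# Hypothetical long-format pedometer log: one row per child per day.
log = pd.DataFrame({
    "child_id":   [1, 1, 1, 1, 1,   2, 2, 2, 2, 2,   3, 3, 3, 3],
    "day":        ["Wed", "Thu", "Fri", "Sat", "Sun"] * 2
                  + ["Wed", "Thu", "Fri", "Sat"],        # child 3 misses Sunday
    "steps":      [12000, 9500, 11000, 8000, 7000,
                   15000, 14000, 13500, 9000, 8500,
                   10000, 9800, 10500, 9900],
    "wear_hours": [13, 12, 14, 12, 13,
                   12, 13, 12, 14, 13,
                   13, 12, 13, 12],
})

# Keep only children with all five days AND mean wear time >= 12 h/day.
valid = log.groupby("child_id").filter(
    lambda g: len(g) == 5 and g["wear_hours"].mean() >= 12
)

# Per-child mean steps per day for the retained children.
print(valid.groupby("child_id")["steps"].mean())  # child 3 is excluded
```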
## 2.2.3. Shared Environment Characteristics (Natural Environment)
Given the country’s heterogeneity in geographical terms, participants came from three distinct regions located at different altitudes: sea level, Amazon region, and high altitude. Barranco (58 m) was the chosen city at sea level; it is one of the 43 districts of Lima Province, located on the shore of the Pacific Ocean. The cities of La Merced and San Ramon (751 m) in the Chanchamayo district represented the Amazon region, which is the largest in the Peruvian territory and occupies ~60% of its surface. The Junín district (4107 m), on the southern shore of Lake Junín (or Chinchaycocha), represented the high-altitude location.
## 2.3. Data Quality Control
Data quality control was enhanced by all assessment team members being systematically trained by the lead researchers of the project to: (i) comply with the correct use of technical body measurement procedures; and (ii) instruct parents and children about the pedometer use protocol and persuade them to follow their regular PA routine. Further, IBM-SPSS v26 software was used to facilitate data entry and to cross-check data elements, employing automatic controls to ensure values were not outside known ranges.
## 2.4. Statistical Procedures
Analysis of the data was conducted in a sequential manner. We first performed data cleaning and initial exploratory analyses to identify outliers and check the normality of distributions. In order to normalize the distribution of the minutes of moderate steps phenotype (min·day⁻¹), a log transformation was applied and the sum of log-scores was computed. Descriptive statistics for all phenotypes (means and standard deviations) were calculated. Differences between geographical residence areas were examined with analysis of variance (ANOVA), followed by Tukey HSD tests for multiple comparisons. SPSS v26 software was used for these analyses, with the Type-I error rate set at 5%. As sibling data are clustered, and since individuals are nested within their sib-ships (brother-brother BB, sister-sister SS, brother-sister BS), multilevel models were used for statistical analysis [38]. To address our first aim, separate within- and between-sib-ship variances were first estimated. From these, intraclass correlation coefficients (ρ) with corresponding 95% confidence intervals (95% CI) for each PA phenotype were computed. Further, based on the likelihood-ratio test, we compared a model that constrained ρ to be equal across sib-ship pairs (Null model) to a model that freely estimated ρ across sib-ship pairs (Model 1). Subsequent models were estimated with the same or different ρ, depending on the result of the likelihood-ratio test.
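The original multilevel analyses were run in STATA 14 (see below); as a minimal sketch of how a between-/within-sib-ship variance decomposition yields an intraclass correlation, the following Python snippet fits a random-intercept model to simulated data (all variable names and values are hypothetical, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 110 sib-ships of 2-3 children sharing a family-level component.
sizes = rng.integers(2, 4, size=110)
sibship = np.repeat(np.arange(110), sizes)
family_effect = rng.normal(0, 2000, size=110)[sibship]
steps = 11000 + family_effect + rng.normal(0, 2500, size=len(sibship))
df = pd.DataFrame({"sibship": sibship, "steps": steps})

# Null model: a random intercept per sib-ship, no covariates.
null = smf.mixedlm("steps ~ 1", df, groups=df["sibship"]).fit(reml=False)

var_between = float(null.cov_re.iloc[0, 0])  # sib-ship (between) variance
var_within = null.scale                      # residual (child-level) variance
icc = var_between / (var_between + var_within)
print(f"ICC = {icc:.2f}")  # ~2000^2 / (2000^2 + 2500^2) = 0.39 by construction
```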
For the second aim, the model was expanded (Model 2) to include individual variables such as age and BMI, with ρ being re-estimated for each sib-type. Finally, the full model (Model 3) included the geographical area of residence. For model comparison, the likelihood-ratio test was used. Given that there are only three regions (sea level, Amazon region, and high altitude), as advocated, we did not treat region as a level in the multilevel model [39]. Instead, dummy variables were used to account for differences attributable to region in the fixed part of Model 3, with sea level as the reference category. Continuous covariates were mean-centered, and sea-level BB pairs served as the reference category. For the multilevel analyses, STATA 14 software was used, with the Type-I error rate set at 5%.
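Continuing the simulated df from the sketch above, the covariate expansion and the likelihood-ratio comparison of Models 2 and 3 could look as follows (mean-centered covariates, region dummies with sea level as reference; the covariates here are randomly generated placeholders, not real measurements):

```python
from scipy.stats import chi2

# Hypothetical mean-centered covariates and a family-level region label.
df["age_c"] = rng.normal(0, 3, size=len(df))
df["bmi_c"] = rng.normal(0, 2.5, size=len(df))
regions = rng.choice(["sea_level", "amazon", "high_altitude"], size=110)
df["region"] = regions[df["sibship"].to_numpy()]

# Model 2: individual covariates only; Model 3 adds the region dummies.
m2 = smf.mixedlm("steps ~ age_c + bmi_c", df,
                 groups=df["sibship"]).fit(reml=False)
m3 = smf.mixedlm(
    "steps ~ age_c + bmi_c + C(region, Treatment(reference='sea_level'))",
    df, groups=df["sibship"]).fit(reml=False)

# Likelihood-ratio test with 2 df (two region dummies added).
lr = 2 * (m3.llf - m2.llf)
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, 2):.3f}")
```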
## 3. Results
Table 1 shows descriptive statistics for all study variables. On average, no statistically significant differences (p > 0.05) were found among sib-ship pairs from the three geographical areas for chronological age and height. Further, siblings living in the Amazon region were heavier (F = 5.55, p < 0.05), had a higher BMI (F = 13.55, p < 0.05), and took more steps·day⁻¹ (F = 21.52, p < 0.05) than their peers from the other regions. On the other hand, sib-ships from the Amazon region spent fewer minutes in moderate steps (min·day⁻¹) than those at sea level (F = 3.53, p < 0.05).
Table 2 provides estimates of the unadjusted and adjusted sibling correlations for each PA phenotype. For both phenotypes, Model 1 did not improve model fit relative to the Null model. Thus, there is insufficient evidence to reject the assumption of equal intraclass correlations for the three sib-pair types (BB, SS, and BS). From the Null model, the intraclass correlation was 0.44 (95% CI = 0.31–0.58) for total number of steps·day⁻¹ and 0.35 (95% CI = 0.22–0.51) for minutes of moderate steps (min·day⁻¹). In general, the inclusion of individual characteristics (Model 2), as well as the different geographical areas, did not significantly change the size of the intraclass correlations for either phenotype. Additionally, for the minutes of moderate steps phenotype, the last model (Model 3) was not tested, since Model 2 was not better than the previous model (Δ = −157.54, p = 0.43).
Table 3 shows the multilevel analysis results. Model 3 fit the data significantly better than Model 2 only for total number of steps·day⁻¹. In general, the PA average for BB pairs was β = 11,158.63 ± 1001.06; SS pairs tended to take fewer steps than BB pairs (β = −2908.75 ± 954.31), while non-significant differences were found between BS and BB pairs (p > 0.05). Older siblings tended to take fewer steps (β = −81.26 ± 19.83, p < 0.05), whereas BMI was not statistically significant (p > 0.05). Further, siblings living at high altitude and in the Amazon region tended to take more steps (β = 2508.92 ± 737.94, p < 0.05; β = 2213.11 ± 776.63, p < 0.05, respectively) than those living at sea level.
## 4. Discussion
The present study is innovative in providing in-country PA data for Peru dedicated to siblings living at different altitudes, with their specific socioeconomic characteristics, cultural disparities, and built and natural environments. Our results suggest that resemblance in the two PA phenotypes among Peruvian sib-ships was apparently influenced mainly by genetic factors, since non-significant changes in the intraclass correlation coefficients were found after adjustment for individual characteristics and geographical area of residence. Further, no significant differences were found between the three sib-ship types.
The available literature has reported varying results. For example, Jacobi, Caille, Borys, Lommez, Couet, Charles and Oppert [27], using French nuclear family data in conjunction with pedometer PA measurements, reported low correlations (ρ = 0.28) among siblings for the number of steps per day, although adjustments were only made for sex and age. On the other hand, Maia et al. [40], using the Baecke questionnaire in Portuguese family data, showed differences in a total PA phenotype between sib-types, with BB pairs resembling each other more than SS and BS pairs. Pereira, Katzmarzyk, Gomes, Souza, Chaves, Santos, Santos, Bustamante, Barreira, Hedeker and Maia [26], also using Portuguese sibling data and the same PA assessment tool, showed that with increasing levels of covariate adjustment, SS pairs showed stronger resemblance than BS and BB pairs. A similar trend was also found in a recent paper analyzing Peruvian sibling resemblance in physical fitness components: significant differences across sib-types were only observed for waist circumference and handgrip strength, with BB correlations being higher than the SS or BS correlations, after adjustments for individual characteristics (including age, height, body mass index, and maturity offset) and geographical area of residence [21]. In summary, we believe that correlation discrepancies between studies may be due to different sampling strategies, diverse covariate adjustments, different statistical techniques used to compute correlations, and the phenotypic expression as well as instruments used.
Some previous genetic studies have attempted to identify specific genes that may regulate PA [22,41]. However, this is not a straightforward task, as heritability estimates for PA have ranged from moderate to very high [10]. For example, in a review by de Vilhena e Santos, Katzmarzyk, Seabra and Maia [12], the authors reported genome-wide linkage data with markers near different PA-related genes, while Lightfoot [24] indicated that only two candidate genes showed consistent associations with the regulation of PA: dopamine receptor 1 (Drd1) and nescient helix-loop-helix 2 (Nhlh2). Further, recent GWAS indicated a genetic contribution to PA, with Doherty et al. [42] uncovering 14 loci for device-measured PA, while Klimentidis et al. [43] identified multiple variants associated with habitual PA, including CADM2 and APOE. Notwithstanding this progress, results are still unclear, most probably because of specificities in the production of genome maps in genome-wide linkage studies, the use of different methods to estimate PA, or the different ethnic composition of each sample.
Our multilevel model showed that the PA average for BB pairs was 11,159 steps·day⁻¹, which means that, on average, they tend to comply with the guideline recommendations for children and adolescents [44]. Consistent with our sibling data, chronological age has been negatively associated with PA [26,45]. In our study, each year of increase in sibling age brought an average reduction of 81 steps·day⁻¹, and Pereira, Katzmarzyk, Gomes, Souza, Chaves, Santos, Santos, Bustamante, Barreira, Hedeker and Maia [26], based on self-reported PA, similarly revealed a decrease among Portuguese siblings. Using non-sibling data, Duncan et al. [46] also showed a decline in the number of steps per day with age among New Zealand children and adolescents (from 15,284 weekday steps and 12,948 weekend steps at 5–6 years of age to 14,801 weekday steps and 10,656 weekend steps at 11–12 years of age). Using accelerometry, Alvis-Chirinos et al. [47] also reported a decline in moderate-to-vigorous PA with age among Peruvian youth (from 1354 min at 6–9 years to 1167 min at 10–13 years).
Our results also indicated dissimilarities in PA among siblings living in the three geographical areas, potentially reflecting marked regional variations in sociodemographic, economic, and cultural features. For example, in the city of Barranco, children are exposed to several built constraints, such as compact urban areas, large population centers, and extensive housing developments, with serious consequences in terms of traffic regulation, not to mention increased public insecurity and environmental problems. Such local constraints can deprive children of playing freely in the community's streets without parental supervision, as well as restrict access to public recreational and sports services. This may help to explain why these sibling pairs walked fewer steps than their peers from the other regions. In turn, in Chanchamayo and Junín, children tend to take more steps per day, probably because they find plenty of space for leisure and free play, helping them to develop their abilities, deepen and widen their experiences, acquire further skills, and discover other interests. However, we could not find a published paper investigating the links between natural environments (sea level, Amazon region, or high altitude) and siblings' PA with which to make suitable comparisons.
Notwithstanding the importance of the present data, some limitations must be recognized. Firstly, without data indicating otherwise, it is possible that our sample is not representative of the overall Peruvian sibling population. Secondly, given the study design, genetic and environmental influences could not be estimated separately because no twins were involved. Thirdly, we made no adjustments for family socioeconomic status. While limited, this report also has several unique strengths. Firstly, the study involves a relatively large sample of siblings from three unique environmental contexts, although its size may not have sufficient power to detect putative interactions of different sib-types with their varying environments. Secondly, the study covered both childhood and puberty periods, expanding the range of potential influences from biological and environmental factors. Thirdly, the use of standardized and highly reliable objective methods for data collection makes significant contributions to the available literature. Finally, the use of a multilevel analysis model with individual and environmental data allows for approaching their interaction in the development of PA.
## 5. Conclusions
In conclusion, our model-based results revealed that, in general, there are no significant differences in the intraclass correlation coefficients for both PA phenotypes after adjustment for age and BMI as well as the geographical area of residence. Further, non-significant differences were found between the three sib-ship types. SS pairs tended to take fewer steps·day⁻¹ than BB pairs, while non-significant differences were found between BS and BB pairs. Older siblings tended to walk fewer steps·day⁻¹, whereas BMI was not associated with PA. Further, siblings living at high altitude and in the Amazon region tended to walk more steps·day⁻¹ than their peers living at sea level.
Overall, our results highlight significant sibling resemblance effects in explaining variance in PA, with genetic factors apparently being the most important contributor to this similarity, although environmental features must also be considered.
# Oesophageal Atresia: Prevalence in the Valencian Region (Spain) and Associated Anomalies
## Abstract
The objective was to determine the prevalence of oesophageal atresia (OA) and describe the characteristics of OA cases diagnosed before the first year of life, born between 2007 and 2019, and resident in the Valencian Region (VR), Spain. Live births (LB), stillbirths (SB), and terminations of pregnancy for fetal anomaly (TOPFA) diagnosed with OA were selected from the population-based Congenital Anomalies Registry of the VR (RPAC-CV). The prevalence of OA per 10,000 births with 95% confidence intervals was calculated, and socio-demographic and clinical variables were analyzed. A total of 146 OA cases were identified. The overall prevalence was 2.4 per 10,000 births; by type of pregnancy ending, it was 2.3 per 10,000 in LB and 0.03 per 10,000 in both SB and TOPFA. A mortality rate of 0.03 per 1000 LB was observed. A relationship was found between case mortality and birth weight (p-value < 0.05). OA was primarily diagnosed at birth (58.2%), and 71.2% of the cases were associated with another congenital anomaly, mainly congenital heart defects. Significant variations in the prevalence of OA in the VR were detected throughout the study period. In conclusion, a lower prevalence in SB and TOPFA was identified compared to EUROCAT data. As several studies have identified, an association between OA case mortality and birth weight was found.
## 1. Introduction
Oesophageal atresia (OA) is a disorder characterized by an interruption in the continuity of the oesophagus, with or without tracheoesophageal fistula (TEF), which communicates with the trachea [1]. It is the most frequent congenital anomaly (CA) of the oesophagus [2]. CAs are structural or functional abnormalities that are present from birth, although they can manifest at later periods, and constitute a diverse group of conditions of prenatal origin that may be due to single gene defects, chromosomal abnormalities, multifactorial inheritance, environmental teratogens, or lack of micronutrients [3].
Most CAs are considered rare diseases due to their low prevalence (in Europe, less than five cases per ten thousand inhabitants). Rare diseases, including those of genetic origin, are chronically debilitating, disabling, and even life-threatening conditions [4]. The global incidence of OA varies from 1 in 2500 to 1 in 4500 live births (LB) [2]. The prevalence of OA for the period 2007–2019 in the European network of population-based registries for the epidemiological surveillance of CA (EUROCAT) is 2.63 per 10,000 births, having remained stable during the last decades but with slight variation between European regions [5].
OA can be present in association with other CAs, generally those included in the VACTERL association, such as vertebral defects, anal atresia, cardiac malformations, TEF, renal anomalies, and limb malformations [6]. The etiology of this disease remains unknown, although it has been linked to genetic and environmental factors. The most associated genetic factors are trisomies, such as Down, Edwards or Patau syndrome, as well as alterations of a single gene, such as CHARGE and Feingold syndromes or Fanconi anemia, among others. Among the environmental factors, maternal exposure to substances such as alcohol and tobacco, the use of in vitro fertilization techniques, and gestational diabetes mellitus stand out [7].
OA is differentiated into types depending on its location and the presence or absence of TEF. The tenth revision of the International Classification of Diseases with the extension of the British Pediatric Association (ICD10-BPA), used by EUROCAT, classifies OA as OA without mention of a fistula or without other specification (code Q39.0) and OA with TEF (code Q39.1). According to Vogt's classification [8], type III OA (with distal TEF) is detected in 86% of cases, type I (without associated TEF) in 7%, and type V (TEF without atresia) in 4%, while type II (with proximal TEF) and type IV (with proximal and distal TEF) are less frequent (<1%) [8]. A comparison of both classifications is presented in Figure 1.
The diagnosis is usually made in the first 24 h of life and may be suspected in the presence of hypersalivation or the inability to swallow saliva. During the prenatal stage, the presence of polyhydramnios or the absence of the gastric bubble, normally observed between 16 and 20 weeks of gestation (GW), can be considered predictive factors. Another warning sign is the dilation of the atretic blind pouch detected during swallowing in the third-trimester ultrasound [9]. Diagnostic confirmation is obtained by a chest and abdominal X-ray demonstrating the abnormality [10].
Case mortality is directly related to low birth weight and major congenital heart defects [1]. Factors such as prematurity can have a negative influence, increasing the mortality of cases; however, the presence of TEF and the existence of other associated anomalies have not been shown to increase mortality [11].
The CA population-based registry of VR (RPAC-CV), which is part of EUROCAT [12], collects information on those diagnosed with OA before the first year of life, and who are residents in the VR. Based on these data, a study was performed to determine the prevalence of OA in the VR and describe the characteristics and distribution of cases with OA born between 2007 and 2019 in the VR.
## 2. Materials and Methods
A cross-sectional study was performed on cases diagnosed with OA before the first year of life, with or without TEF, born between 2007 and 2019 in the VR. The VR is one of the seventeen regions of Spain, with a population of approximately 5 million and around 45,000 births per year.
The RPAC-CV was used as a source of information, from which the cases with a confirmed diagnosis of OA were obtained, coded with the codes Q39.0 and Q39.1 of the ICD10-BPA. The inclusion criteria used were those marked by EUROCAT that consider as cases all those residing in the VR who present at least one major CA [12]. The study subjects were LB, stillbirths (SB), and termination of pregnancy for fetal anomaly (TOPFA), diagnosed prenatally or during the first year of life.
The variables included in the analysis were those related to the case, to the CA, and to the pregnant woman.
Regarding the statistical analysis, firstly, the prevalence per 10,000 births and its 95% confidence intervals (95% CI) were calculated for the whole period and for each year. In addition, the prevalence by type of pregnancy ending was calculated. The distribution of cases by sex and weight at birth was obtained. Birth weight in LB was divided according to the classification recommended by the World Health Organization (WHO) [13]: very low birth weight (VLBW) ≤1500 g, low birth weight (LBW) 1501–2500 g, normal weight 2501–3999 g, and macrosomic ≥4000 g. The mean birth weight of LB cases was also obtained.
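As a minimal sketch of this prevalence calculation, the Python snippet below computes a rate per 10,000 births with an exact (Garwood) Poisson confidence interval; the denominator is an assumption reconstructed from the ~45,000 births/year stated above, not the registry's actual figure:

```python
from scipy.stats import chi2

def rate_per_10000(cases: int, births: int, alpha: float = 0.05):
    """Prevalence per 10,000 births with an exact Poisson (Garwood) CI."""
    lo = chi2.ppf(alpha / 2, 2 * cases) / 2 if cases > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2
    scale = 10_000 / births
    return cases * scale, lo * scale, hi * scale

# 146 OA cases over 13 years; ~45,000 births/year is an assumed denominator.
rate, lo, hi = rate_per_10000(146, 13 * 45_000)
print(f"{rate:.1f} per 10,000 births (95% CI {lo:.1f}-{hi:.1f})")
```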
The frequency of OA was described according to the number of babies at the pregnancy ending, as well as by gestational age, classifying the cases as less than 28 GW, 28–32 GW, 33–36 GW, and 37 GW or more [14]. The mean gestational age at pregnancy ending was obtained for all cases, including SB and TOPFA.
For LB cases who died during the first year of life, the time elapsed from birth to death was calculated, and the median number of days was obtained. The crude mortality rate per 1000 births was obtained, both overall and by birth weight group. In addition, the frequency of cases that required some surgical procedure during the first year of life was calculated. A Fisher exact test was carried out to study the relationship between birth weight categories and death of the cases (yes/no).
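For illustration, a Fisher exact test on a 2×2 table (collapsing birth weight to low vs. normal) might look like the sketch below; the counts are a hypothetical reconstruction from the percentages reported in the Results section, not the registry's exact table:

```python
from scipy.stats import fisher_exact

# Rows: low birth weight (VLBW + LBW) vs. normal weight.
# Columns: died vs. survived during the first year of life.
# 17 of the 20 deaths (85%) had VLBW/LBW per the Results; the other
# cells are illustrative values consistent with the reported shares.
table = [[17, 57],   # low birth weight: died, survived
         [3, 59]]    # normal weight:    died, survived

result = fisher_exact(table)
print(f"odds ratio = {result.statistic:.1f}, p = {result.pvalue:.4f}")
```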
Moreover, the frequency of cases according to the type of OA was determined. Once the RPAC-CV cases coded according to the ICD10-BPA were obtained, they were mapped to Vogt's classification [8] using the free-text diagnosis, which includes the location of the TEF (type V was not taken into account in this study because it does not include an OA diagnosis). In addition, the CAs and syndromes most frequently associated with OA were identified, grouped into subgroups according to EUROCAT [12], and their frequency was analyzed. The frequency of cases was calculated according to the time of diagnosis, as well as the mean gestational age at diagnosis of the first CA in cases with prenatal detection.
The number of cases conceived by assisted conception and the frequency of pregnant women with a history of spontaneous abortions and previous TOPFA were studied, as well as the frequency of maternal diseases before and during pregnancy. The drugs used during the first trimester of pregnancy were classified according to the groups of the anatomical-therapeutic-chemical (ATC) classification [15], and the country of birth of each pregnant woman was determined.
Finally, the distribution of cases and the prevalence according to the mother’s residence by provinces of the VR were analyzed to describe their geographical distribution.
Statistical analysis was performed using IBM SPSS Statistics 22 software, applying the chi-square test for qualitative variables and Student's t-test for quantitative variables to detect statistically significant differences.
## 3. Results
A total of 146 OA cases were identified in the RPAC-CV during the period from 2007 to 2019. The overall period prevalence was 2.4 per 10,000 births (95% CI: 2.0–2.8), with 2016 being the year with the highest prevalence (3.8 per 10,000 births) and 2014 the one with the lowest (1.6 per 10,000 births). Figure 2 shows the evolution of the annual prevalence for each year of the study period.
Regarding the distribution by type of pregnancy ending, 97.3% (142 cases) of the OA cases were LB, 1.4% (2 cases) were SB, and 1.4% (2 cases) corresponded to TOPFA. The prevalence by type of pregnancy ending was 2.3 per 10,000 births (95% CI: 2.0–2.7) for LB and 0.03 per 10,000 births (95% CI: 0.0–0.1) for both SB and TOPFA. Table 1 shows the annual prevalence of OA cases, with or without TEF, according to the type of pregnancy ending.
The distribution by sex of the OA cases was 57.5% male and 41.1% female. In 1.4% of the cases, corresponding to TOPFA, this information was unknown.
Regarding the weight of the LB cases, a mean birth weight of 2426 ± 963 g was obtained. In addition, 10.3% had VLBW, 41.8% LBW, 43.8% normal weight, and 0.7% were macrosomic. In 0.7% of cases, this information was unknown.
It was observed that 6.8% of the cases corresponded to twin pregnancies and 2.1% to triplet gestations; the rest were single pregnancies (91.1%). The mean gestational age at pregnancy ending was 36.7 ± 6.3 GW: 47.3% of cases ended between 33–36 GW, 38.4% at 37 GW or more, 10.3% between 28–32 GW and, finally, 3.4% at less than 28 GW. The GW at pregnancy ending was unknown in 0.7% of the cases.
Considering only LB cases, 13.7% died before one year of age. The median number of days elapsed between birth and death was 6 days. A crude mortality rate during the first year of life of 0.03 per 1000 births was observed for the period 2007–2019. In relation to birth weight, 85.0% of the 20 deaths had VLBW or LBW (Table 2). Mortality in low-weight cases was higher than in normal-weight cases, with a crude mortality rate of 0.01 per 1000 births for VLBW, 0.02 per 1000 births for LBW, and 0.005 per 1000 births for normal weight. The mortality rate over the period is shown in Figure 3. A Fisher's exact test was performed to study the relationship between birth weight categories and death of cases, obtaining a statistically significant association (p < 0.05).
In 88.7% of LB cases, at least one surgical procedure was performed during the first year of life; surgery was not required in 1.4%, and in 0.7% surgery was not performed because the case was considered too severe for the procedure. In 9.2%, this information was unknown.
Concerning the type of OA, according to Vogt’s classification [8] or location of the TEF, a higher frequency of OA with distal TEF or type III was detected, followed by OA without TEF or type I (Table 3).
Of the 146 cases, 71.2% had another associated CA. A total of 334 associated malformations were identified, since more than one different associated anomaly was identified in some of the cases studied. Most of these malformations corresponded to congenital heart defects (Table 4). The relationship between congenital heart disease (yes/no) and case mortality (yes/no) was studied using the chi-square test, and no statistically significant relationship was obtained (p > 0.05).
In addition, OA was found to be associated with syndromes or associations of malformations in 15.8% of the total cases. Among these, the most frequent was the VACTERL association (43.5% of these cases), followed by Edwards syndrome (26.1%). In third place was polymalformative syndrome (13.0%), and finally the Patau, Crouzon, and Cri-du-chat syndromes and the CHARGE association, with 4.3% each.
According to the time of diagnosis of the first CA in each case (which can be the OA or another associated CA), in 58.2% of cases it was detected at birth, and in 37.7% it was diagnosed prenatally. In 2.1%, it was detected during the first week of life, and in 2.1% of the cases this information was unknown.
In those diagnosed prenatally, ultrasound was the predominant diagnostic technique during the prenatal stage. In 43.6% of cases, the first malformation was detected in the third trimester of pregnancy, in 30.9% during the second trimester, and in 3.6% during the first trimester (Table 5). In 21.8% of cases, this information was unknown. The mean gestational age at prenatal diagnosis was 27.2 ± 6.5 GW.
The mean age of the pregnant women at the time of pregnancy ending was 32 years (range 18 to 47 years). A total of 13.0% of the pregnancies were conceived by assisted conception. The relationship between assisted conception and the type of OA was studied using the chi-square test, and no statistically significant differences were identified (p > 0.05). On the other hand, 21.2% of the pregnant women had a history of spontaneous abortions and 11.0% of previous TOPFA.
Endocrine diseases (11.6%), such as hypothyroidism, obesity, and hyperlipidemia, were the medical conditions most frequently observed in the pregnant women before pregnancy, followed by a personal history of CA (5.5%), such as kidney abnormalities, congenital heart disease, and pleural abnormalities. Likewise, gynecological pathologies (4.8%), infections (4.1%), hereditary genetic diseases (3.4%), respiratory diseases (2.0%), and psychiatric, vascular, and digestive diseases and allergies were found, with a frequency of 1.3% each.
In addition, 37.7% of the pregnant women presented some pathology during pregnancy. Specifically, a total of 67 diseases were diagnosed; in some cases, the pregnant women had more than one disease. Polyhydramnios (20.0%), gestational diabetes mellitus (18.5%), hypothyroidism (10.8%), urinary tract infections (10.8%), and gestational hypertension (7.7%) were the most frequently observed. A total of 26.7% of the pregnant women did not have gestational diseases, and there was no information available in 35.6% (Table 6).
A total of 37.0% of the pregnant women took drugs during the first trimester of pregnancy, while for 45.9%, this information was not available. The most used drugs were antibiotics, mainly clindamycin in suppositories and ampicillin, followed by antithyroid drugs, vitamin and pregnancy supplements, corticosteroids, and antihypertensives (Table 7).
Regarding the country of birth of the pregnant women, 56.8% were born in Spain and 17.8% were born abroad, most frequently in Morocco, Romania, and Bolivia. The country of origin was unknown for 25.4% of the pregnant women.
Regarding the geographical distribution according to maternal residence, 49.3% of the pregnant women resided in the province of Valencia, 43.8% in the province of Alicante, and 6.8% in the province of Castellón. The prevalence by province for the study period was 2.9 per 10,000 births (95% CI: 2.2–3.7) in Alicante, 2.3 per 10,000 births (95% CI: 1.8–2.8) in Valencia, and 1.4 per 10,000 births (95% CI: 0.5–2.2) in Castellón.
## 4. Discussion
The overall prevalence of OA obtained in the VR for the period 2007–2019 was similar to that reported by EUROCAT [5] for the same period (2.3 per 10,000 births). It was also similar to that of other population-based CA registries belonging to EUROCAT [5], such as the Basque Country (Spain), whose prevalence was 2.5 per 10,000 births. In the case of Norway, a European country whose population is quite comparable to that of the VR and which is also part of EUROCAT [5], the prevalence was only slightly higher (2.8 per 10,000 births) than that in the VR [5].
Furthermore, studies such as the one by Nassar et al. [16], whose cases were classified using the ICD9-BPA or ICD10-BPA and belonged to birth defects surveillance programs in North America, South America, Europe, and Australia, found a global prevalence of OA similar to that obtained in the RPAC-CV: 2.4 per 10,000 births during the period 1998–2007 [16].
In VR, significant variations were detected in the annual prevalence of OA cases throughout the study period, with the lowest prevalence being in 2014 and the highest in 2016.
The prevalence by type of pregnancy ending of OA cases identified in EUROCAT [5] during the period 2007–2019 was higher than that obtained in the VR, both in SB (0.06 per 10,000 births) and in TOPFA (0.13 per 10,000 births).
Concerning the sex of OA cases in the VR, a slight male predominance was detected, in agreement with the work of Vara Callau et al. [8], in which a ratio of 1.5:1 was found. However, the frequency of twin and triplet pregnancies was lower in the VR than that found by these authors [8].
Regarding the time of diagnosis of OA cases in the VR, 37.7% were detected prenatally, a value higher than that found by Sfeir et al. [11], who described, for the period 1998–2007, a prenatal diagnosis in 30% of cases [11]. It is important to note that Sfeir describes only prenatal OA diagnoses, whereas in the VR any first CA diagnosis is included, which is not necessarily the OA. Advances in prenatal testing techniques may be the reason for this increase. On the other hand, coincidences were found in the time elapsed until diagnosis in the cases detected postnatally, both being within the first 24 h of life [17].
The mean gestational age at the time of pregnancy ending in OA cases in the VR was slightly lower than that described by Vara Callau [8]: 36.7 ± 6.3 GW vs. 37.1 ± 2.6 GW, respectively [8]. A total of 61.0% of the OA cases in the RPAC-CV ended the pregnancy at less than 37 GW, a much higher frequency than in the general population, where a prematurity rate of 8.3% has been described [18]. A total of 52.1% of the cases had low birth weight (including VLBW and LBW), a higher value than that described by Galarreta et al. [19], who found that 49.6% of cases were VLBW or LBW at birth [19]. When comparing the frequency of cases with VLBW, the 10.3% found in our sample contrasts with the 8.6% described in the aforementioned study [19].
Moreover, in the OA cases of the VR, a higher percentage of male sex, low weight, and preterm gestational age (≤36 GW) at the time of pregnancy ending was identified in comparison with all the CA cases from the RPAC-CV during the same study period [20].
According to other studies [1,21], the mortality of OA cases is directly related to birth weight and associated heart defects. In the VR, a statistically significant association between birth weight and mortality in LB cases was found. Congenital heart defects were the anomalies most frequently associated with OA in the VR; however, no statistically significant relationship was found between congenital heart disease and OA case mortality [1].
The association of OA with other CAs has been repeatedly described in different studies [1,22], suggesting the need to look for associated malformations before diagnosing OA. A higher frequency of CA associated with OA was found in the VR (71.2%) compared with the 50% described by De Jong et al. [7]. Among the associated CAs, the author [7] describes a frequency of 10% of cases related to some component of the VACTERL association, a higher frequency than that found in the VR (6.2%), although also prevailing over other associated syndromes.
In the OA cases of the VR, 15.8% were associated with syndromes or associations of malformations and 6.9% with chromosomal abnormalities, equivalent to that described in the literature [22] and lower than that observed by Galarreta et al. [19], who describe 10.2% of cases associated with chromosomal abnormalities. The main chromosomal abnormality among the RPAC-CV cases was Edwards syndrome (4.1%), with a lower frequency than that described by Felix et al. [23] but prevailing over the rest of the chromosomal abnormalities.
In the VR, the frequency of cases with type III OA was lower than in similar studies [9]; however, the frequency of cases with type I OA was higher than that found in those studies [9]. This may be due to the fact that we have a high percentage of OA cases with TEF without a specified location, and, according to the literature [9], these would be expected to be mainly type III.
In addition, among pregnant women with OA cases in the RPAC-CV, 21.2% had a history of previous spontaneous abortions and 11.0% a history of previous TOPFA, coinciding with the 20% history of spontaneous abortions described in the literature [24] and with the 11.7% history of TOPFA in Spanish public hospitals in 2015 [25].
Gestational diabetes mellitus has been associated with the appearance of CA, macrosomia, neonatal complications, and a high percentage of perinatal mortality [17]. In the VR, 8.2% of pregnant women with OA cases developed gestational diabetes during pregnancy, a higher incidence than that described in the literature [17], where an incidence between 1% and 5% of pregnancies was estimated [17]. In addition, a 5.5% personal history of CA was observed in the pregnant women, a percentage that coincides with that described by Spitz [26], who also describes a similar proportion of a history of CA in first-degree relatives with one or more components of the VACTERL association [26].
A limitation of the study is the small number of cases, intrinsic to OA being a rare disease, which could only be expanded by studying a longer period or a larger territory. Another limitation is the lack of information for some of the clinical variables under study, which will foreseeably improve over time given the recent implementation of the electronic medical record in the Spanish health system, which seems to be increasing the quality of health data and its collection [16].
## 5. Conclusions
In conclusion, the global prevalence of OA obtained in the RPAC-CV was similar to that of EUROCAT (2.3 per 10,000 births) for the same period. However, EUROCAT identified a higher prevalence in SB (0.06 per 10,000 births) and TOPFA (0.13 per 10,000 births) than that obtained in the VR. OA is a CA whose mortality is influenced by factors such as birth weight. In many cases, OA is associated with other CAs, mainly congenital heart defects. The appearance of TEF is quite frequent, with type III OA prevailing. Although the prenatal diagnosis of OA has increased over time, detection at birth continues to be more frequent.
# Acetylsalicylic Acid Effect in Colorectal Cancer Taking into Account the Role of Tobacco, Alcohol and Excess Weight
## Abstract
Excess weight, smoking and risky drinking are preventable risk factors for colorectal cancer (CRC). However, several studies have reported a protective association between aspirin and the risk of CRC. This article looks deeper into the relationships between risk factors and aspirin use and the risk of developing CRC. We performed a retrospective cohort study of CRC risk factors and aspirin use in persons aged >50 years in Lleida province. The participants were inhabitants with some medication prescribed between 2007 and 2016 who were linked to the Population-Based Cancer Registry to detect CRC diagnosed between 2012 and 2016. Risk factors and aspirin use were studied using adjusted HRs (aHR) with 95% confidence intervals (CI) from a Cox proportional hazards model. We included 154,715 inhabitants of Lleida (Spain) aged >50 years. Of patients with CRC, 62% were male (HR = 1.8; 95% CI: 1.6–2.2), 39.5% were overweight (HR = 2.8; 95% CI: 2.3–3.4) and 47.3% were obese (HR = 3.0; 95% CI: 2.6–3.6). Cox regression showed an association between aspirin and CRC (aHR = 0.7; 95% CI: 0.6–0.8), confirming a protective effect against CRC, and an association between the risk of CRC and excess weight (aHR = 1.4; 95% CI: 1.2–1.7), smoking (aHR = 1.4; 95% CI: 1.3–1.7) and risky drinking (aHR = 1.6; 95% CI: 1.2–2.0). Our results show that aspirin use decreased the risk of CRC and corroborate the relationship between overweight, smoking and risky drinking and the risk of CRC.
## 1. Introduction
Colorectal cancer (CRC) is the third leading cause of cancer death globally and the second in Europe, and its incidence is steadily rising in developing nations [1], with nearly 520,000 new cases in Europe in 2020 [2], even though a large proportion of these cases is highly preventable [3]. A study in nine European countries found that approximately 20% of CRC cases may be related to overweight, smoking and risky drinking [4]. In contrast, studies have shown that long-term aspirin use may prevent CRC [5,6].
Shaukat et al. found a direct relationship between the body mass index (BMI) and long-term CRC mortality and suggested that BMI modulation may reduce the risk of CRC mortality [7]. A recent study has shown the role of obesity and overweight in early-onset CRC, and concluded that obesity is a strong risk factor [8]. Ghazaleh Dashti et al. found an association between risky drinking and an increased risk of CRC [9]. Likewise, a study has suggested an association between passive smoking and the risk of CRC [10].
Some studies have found a protective effect of aspirin against CRC [11] and various studies have concluded that aspirin reduces the overall risk of CRC recurrence and mortality and colorectal adenomas. Ma et al. recently found that aspirin, including low-dose aspirin, reduced the risk of CRC [12]. A recent study by Zhang et al. on the effect of aspirin use for 5 and 10 years found that the continuous use of aspirin increases the protective effect on CRC [13]. A Danish study also found that the continuous use of low-dose aspirin was associated with a reduced CRC risk [14]. Some studies have shown differing results on the protective effect of aspirin due to the different designs used, the type of follow-up, the recorded aspirin consumption and the size and type of population. Although the data seem compelling, a limitation of these analyses is that they do not take into account risk factors for CRC [15]. These previous studies investigated the association between the use of aspirin and CRC, but they did not study the role played by risk factors such as tobacco smoking, alcohol or excess weight. In this study, we explore how these factors, combined with aspirin use, affect the risk of CRC in a particular society.
The objective of this study was to determine the protective effect of aspirin against CRC, taking into account the effect of other risk factors (overweight/obesity, risky drinking and smoking), in Lleida, a province in Catalonia, Spain, with a large rural population and an agri-food industry that may present specific risk factors [16,17].
## 2.1. Study Population
We conducted a retrospective cohort study of aspirin use and risk factors to analyze the impact of these factors on the risk of CRC. We carried out the study on 154,715 inhabitants of Lleida aged >50 years at the start of the study period, with data available on aspirin use from 1 January 2007 to 31 December 2016 in the Catalan Health Service (CatSalut) system. The reason for selecting this period was to ensure that those CRC cases detected in 2012 had the opportunity to be exposed to aspirin for at least five years. This population was linked to the Lleida Population-based Cancer Registry to detect CRC diagnosed between 2012 and 2016.
Data on aspirin use were obtained from the number of packages dispensed by pharmacies. Catalonia has a public health system in which medicines are dispensed in pharmacies on presentation of a doctor's prescription. Drugs administered to hospitalized patients and those prescribed by private providers are not registered in the CatSalut system and were therefore not included in this study. The CRC cases in the sample were obtained from the Lleida Population-Based Cancer Registry, and the demographic characteristics of participants, including age and sex, were obtained from the CatSalut system. Figure 1 shows a flowchart of the study population. Initially, the pharmacy database registered 724,070 inhabitants with any prescription, of whom 346,365 were excluded because they did not reside in the Lleida region. Another exclusion criterion was age: we only included inhabitants aged >50 years at the start of the observed period (2007), resulting in 154,717 inhabitants. We also excluded inhabitants whose risk factors were not correctly registered, although these exclusions were minimal.
As described above, this study linked different databases. To enable this linkage, it was necessary to use a personal identification code called the CIP. This code is unique to each inhabitant residing in Catalonia and allows identification in the Catalan Health Service and its registers (hospitals, pharmacies and primary care centers).
## 2.2. Data Collection
Data on CRC diagnoses were obtained from the Lleida Population-Based Cancer Registry using five consecutive years of incidence data, from 2012 to 2016. This period was chosen because these were the years validated by the registry's professionals. Potential CRC cases were validated by checking medical records. We used hospital and pathological anatomy records as the main information sources. Cancers were identified following the rules defined by the International Association of Cancer Registries, the International Agency for Research on Cancer and the European Network of Cancer Registries.
The risk factors included were risky drinking, smoking and body mass index. This information was extracted from the eCAP software (V 20.4.3) used by primary care physicians to record all patient information, which holds records from 2001 onwards. The values of these variables at the start of this study were used. Body mass index (BMI) was calculated from the weight and height of the patient using the formula BMI = weight (kg)/height (m)² and categorized as follows: 18.5–24.9 normal weight, 25–29.9 overweight and ≥30 obesity [18]. Risky drinking and smoking were identified by ICD-10 international criteria. The ICD-10 code for risky drinking is F10.2, and those for smoking are F17 (mental and behavioral disorders due to tobacco use) and Z72 (tobacco use). Risky drinking was defined as consumption of >40 g/day of alcohol in men and >24 g/day in women [19]. The Spanish Health Ministry defined these grams per day with the supervision of the WHO [20]. The software also records the date of smoking onset. Smokers were defined as those with exposure for >5 years before the start of the study; this five-year period was based on a previous study suggesting that it might increase the risk of cancer [21]. Former smokers were considered smokers because the observed points in the dataset were minimal, and adding this new category could have imbalanced the dataset. General characteristics are presented in Table 1.
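A minimal sketch of this variable coding (Python/pandas, with hypothetical records; the BMI cut-points and ICD-10 codes follow the paragraph above, everything else is assumed for illustration):

```python
import pandas as pd

# Hypothetical patient records extracted at baseline.
patients = pd.DataFrame({
    "weight_kg": [70, 95, 58],
    "height_m":  [1.75, 1.70, 1.62],
    "icd10":     [["Z72"], ["F10.2", "F17"], []],
})

patients["bmi"] = patients["weight_kg"] / patients["height_m"] ** 2
patients["bmi_cat"] = pd.cut(
    patients["bmi"],
    bins=[18.5, 25, 30, float("inf")],
    labels=["normal", "overweight", "obese"],
    right=False,  # intervals [18.5, 25), [25, 30), [30, inf)
)
patients["smoker"] = patients["icd10"].apply(
    lambda codes: any(c.startswith(("F17", "Z72")) for c in codes)
)
patients["risky_drinking"] = patients["icd10"].apply(lambda c: "F10.2" in c)

print(patients[["bmi", "bmi_cat", "smoker", "risky_drinking"]])
```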
## 2.3. Exposure
Aspirin was categorized according to the Anatomical Therapeutic Chemical (ATC) classification system as A01AD05 (acetylsalicylic acid) medication. The use of aspirin was evaluated based on the defined daily dose (DDD) and the milligrams (mg) accumulated dose consumed by each patient throughout the study period. The DDD is a technical unit of measurement that corresponds to the daily maintenance dose of a drug for its main indication in adults and a given route of administration. The DDDs of active ingredients are established by the WHO and published on the WHO Collaborating Center for Drug Statistics Methodology website [22,23].
Exposure was determined from computerized pharmacy data and consisted of the total DDD dispensed to an individual during the study period. For instance, if a person consumed aspirin for a while, then stopped using it and later started again, the total DDD consumed over the whole period was considered. To be considered exposed to aspirin, the total number of years of consumption had to be ≥5 years; this threshold was based on previous studies suggesting it as the minimum for aspirin to have a protective effect [13,24]. In addition, the minimum daily consumption had to be >75 mg [25,26], with the accumulated dose in mg calculated from the number of DDDs.
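As a rough sketch of this exposure rule (not the authors' actual algorithm), the snippet below classifies a patient as exposed when dispensing spans at least five years of consumption and the average dispensed dose exceeds 75 mg/day; the table layout, simplifying assumptions, and values are all hypothetical:

```python
import pandas as pd

# Hypothetical dispensing records: one row per package dispensed.
disp = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "year":       [2007, 2008, 2009, 2007, 2013],
    "mg_total":   [30 * 100, 30 * 100, 30 * 100, 30 * 300, 30 * 300],
})

def aspirin_exposed(g: pd.DataFrame,
                    min_years: int = 5,
                    min_mg_per_day: float = 75.0) -> bool:
    # Simplification: count distinct calendar years with any dispensing
    # as "years of consumption", and average the dose over those years.
    years = g["year"].nunique()
    mg_per_day = g["mg_total"].sum() / (years * 365)
    return years >= min_years and mg_per_day > min_mg_per_day

exposed = disp.groupby("patient_id").apply(aspirin_exposed)
print(exposed)  # neither toy patient meets the >=5-year criterion
```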
## 2.4. Statistical Analysis
Descriptive analyses were performed to evaluate the associations between baseline characteristics, exposure and outcomes. Patients' characteristics, risk factors and aspirin exposure were analyzed to determine their association with the risk of CRC. The incidence rate of CRC was calculated for each factor over the study period. A bivariate analysis was initially used to estimate crude hazard ratios for the association between aspirin consumption and the risk of incident CRC.
A Cox proportional hazards model was used to determine the HRs and the corresponding 95% CIs. The models were adjusted for sex, age, aspirin exposure, BMI, risky drinking and smoking. Subsequently, models stratified by sex were calculated.
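For illustration, a Cox model of this form could be fitted with the lifelines library in Python as below; the data are simulated so that the hazards roughly mirror the reported adjusted HRs (aspirin ≈ 0.7, excess weight ≈ 1.4, smoking ≈ 1.4), and every name and value is hypothetical rather than taken from the study:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5_000

df = pd.DataFrame({
    "male":       rng.integers(0, 2, n),
    "aspirin":    rng.integers(0, 2, n),
    "overweight": rng.integers(0, 2, n),
    "smoker":     rng.integers(0, 2, n),
})

# Exponential survival times whose hazard encodes the assumed effects.
log_hr = (np.log(1.8) * df["male"] + np.log(0.7) * df["aspirin"]
          + np.log(1.4) * df["overweight"] + np.log(1.4) * df["smoker"])
df["time"] = rng.exponential(scale=10.0 / np.exp(log_hr))
df["crc"] = (df["time"] <= 5.0).astype(int)  # event observed in follow-up
df["time"] = df["time"].clip(upper=5.0)      # administrative censoring at 5 y

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="crc")
cph.print_summary()  # hazard ratios should recover ~1.8, ~0.7, ~1.4, ~1.4
```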
The probability values for the statistical tests were two-tailed, and a CI that did not contain 1.0 was regarded as statistically significant. Results with wide CIs should be interpreted cautiously. All statistical analyses were performed using R (R Core Team 2019), an open-source programming language and environment for statistical analysis and graphic representation.
## 3. Results
We analyzed 154,715 inhabitants of Lleida aged >50 years, of whom 1276 (0.8%) had CRC between 2012 and 2016. The mean CRC incidence rate and the total cases by sex and age group for the five study years are shown in Figure 2a,b.
The sociodemographic information and aspirin exposure in patients with CRC (Table 2) were analyzed in the bivariate analysis.
We recorded 485 (0.8 per 1000) females and 791 (1.4 per 1000) males (HR = 1.9; 95% CI: 1.6–2.0) with CRC. Most patients were in the 60–69 years (HR = 1.8; 95% CI: 1.6–2.1) and 70–79 years age groups (HR = 2.0; 95% CI: 1.9–2.6). There were 1138 (1.2 per 1000) CRC cases without aspirin consumption and 138 (1.0 per 1000) with aspirin consumption (HR = 0.9; 95% CI: 0.8–1.1). There were 504 (1.2 per 1000) cases with overweight (HR = 2.5; 95% CI: 2.2–3.1) and 603 (1.3 per 1000) with obesity (HR = 2.7; 95% CI: 2.3–3.3); 56 (2.2 per 1000) cases involved risky drinking (HR = 2.1; 95% CI: 1.6–2.7), while 220 (2.0 per 1000) were smokers (HR = 2.0; 95% CI: 1.8–2.4).
Cox regression showed variations in the outcomes (Table 3). Sex, age and aspirin exposure were significantly associated with CRC. The adjusted HR (aHR) was 1.8 (95% CI: 1.6–2.1) for males, 1.8 (95% CI: 1.6–2.1) in the 60–69 years age group, 2.3 (95% CI: 1.9–2.7) in the 70–79 years age group, 2.2 (95% CI: 1.8–2.6) in the 80–89 years age group and 0.2 (95% CI: 0.1–0.3) in the 90–99 years age group. Aspirin consumption had an aHR of 0.7 (95% CI: 0.6–0.8). BMI was also significant: overweight had an aHR of 1.4 (95% CI: 1.2–1.7) and obesity of 1.5 (95% CI: 1.3–1.8). Risky drinking had a significant aHR of 1.6 (95% CI: 1.2–2.0) and smoking an aHR of 1.4 (95% CI: 1.3–1.7). Figure 3 represents the adjusted hazard ratios graphically.
HRs were adjusted by gender, age, aspirin use, BMI, risky drinking and smoking.
Table 4 shows the results of the Cox regression stratified by sex. In males, the results were similar to those of the overall model: aspirin exposure remained significant (aHR: 0.7; 95% CI: 0.6–0.8), as did BMI, risky drinking, and smoking. In females, aspirin use remained significant (aHR: 0.6; 95% CI: 0.4–0.8), but, of the risk factors, only obesity remained significant (aHR: 1.4; 95% CI: 1.2–1.9). Figure 4 represents the adjusted hazard ratios graphically.
HRs were adjusted by age, aspirin use, BMI, risky drinking and smoking.
## 4. Discussion
Our results confirm the inverse association between aspirin consumption and CRC, independently of the other risk factors measured. Males may be at a higher risk of CRC than females, but aspirin may be slightly more protective in females.
Reports support a delayed protective effect of aspirin on CRC [27]. A meta-analysis by Rothwell et al. examined the long-term effects of aspirin on CRC outcomes using randomized trials of aspirin [28]. Studies on the impact of aspirin in CRC prevention have been published [6,29], although the combined effects of risk factors and aspirin use have not yet been analyzed. Therefore, our findings corroborate research highlighting the protective effect of aspirin and go beyond it by weighing this positive effect against the negative effects of several risk factors.
Several recent studies have suggested an association between aspirin use and some specific cancers. Ciu et al. concluded that high-dose aspirin reduced the risk of pancreatic cancer [30]. Jacobo et al. analyzed studies on the relationship between aspirin and breast cancer [31] and concluded that aspirin consumption reduced the relative risk of breast cancer. Sieros et al. suggested that aspirin reduced the risk of esophageal cancer [32].
We found significant differences according to sex, suggesting that men have a higher risk of developing CRC. It has been reported that men have higher cumulative levels of smoking than women and a higher alcohol intake, which may explain the higher risk [33].
People aged between 60 and 80 years had a higher risk of CRC and the 80–89 years and 90–99 years age groups had a lower risk [34,35]. Older adults may have a differential mechanism compared with younger people. For example, aging is associated with alterations in DNA methylation, which may affect the susceptibility to cancer. The gut microbiota of older people differs from that of younger adults, which may influence drug metabolism and inflammatory processes. Genetics, underreporting and age-related physiological effects could explain the reduced risk [36].
We found some differences with respect to risk factors, such as overweight/obesity, risky drinking, and smoking. Overweight accounted for 39.5% of total CRC cases and obesity for 47.3%; therefore, approximately 85% of patients with CRC presented excess weight, suggesting exposure to a poor diet. These results corroborate previous studies [37,38,39,40]. Excess weight is one of the most important risk factors for CRC: individuals with a higher BMI have higher levels of chronic inflammation, and obesity may act on colorectal tumorigenesis through the gut microbiome and has been shown to promote colorectal cancer in mice. There were notable differences in risky drinking: patients with risky drinking had a higher risk of CRC (HR = 2.2). Meta-analyses of case–control and cohort studies suggest that high alcohol consumption might be associated with an increased risk of colorectal cancer, and the epidemiological evidence has been complemented by molecular evidence on the mechanisms that could explain this association [17,41]. Similar results were obtained for smoking (HR = 2.0); the crude HR also indicated this association between smoking and CRC [42]. Smoking has been more closely associated with colorectal tumors arising from non-conventional pathways, such as the serrated polyp pathway, and was significantly associated with the risk of advanced serrated polyps in a screening population.
The Cox regression included all the remaining model variables, i.e., the risk factors and aspirin exposure. Sociodemographic variables such as sex and age confirmed the correlation with CRC. Males were at 1.8 times higher risk than females, which may be related to men more often having excess body weight and higher exposure to alcohol and smoking than women [43]. Regarding the age groups, the results confirmed that the 70–79 years age group had the highest risk, 2.3 times greater than that of the 50–59 years reference group, with elevated risks also in the 60–69 years and 80–89 years age groups. Other studies found similar outcomes regarding the incidence of and associations with CRC [44,45].
The use of aspirin for ≥5 years was significant in the Cox regression. The analysis suggested that aspirin decreased the risk of CRC: the HR was 0.7 (95% CI: 0.6–0.8), corresponding to a 30% reduction in the risk of CRC [46,47]. Studies have found reductions of 20–30% [46] and 27% [47] in CRC risk. The risk factors were associated with an increased risk of CRC. Overweight and obesity were significantly associated with 1.4 and 1.5 times higher CRC risk, respectively; obesity carried the higher risk, although the HRs were similar [48]. Risky drinking and smoking also had significant HRs: risky drinking conferred a 1.6 times and smoking a 1.4 times higher risk. Other studies also found these associations, reporting risks 1.3 [49] and 1.2 [50] times higher for risky drinking and 1.2 times higher for smoking [50].
The Cox regression stratified by sex also yielded significant results. Men and women had similar outcomes according to age, with the same trends as in the non-stratified regression: the risk of CRC was higher in people aged 60–89 years in both sexes. The use of aspirin also maintained its association with a reduced CRC risk. Specifically, in females, aspirin could prevent CRC, in the best case, by up to 40%. A similar percentage was obtained by Cook et al. [51] in a randomized controlled trial, which showed 42% protection by aspirin against CRC risk among women. These results corroborate the finding that aspirin reduces the risk of CRC in both sexes [52]. However, the results related to risk factors were significant only in males, in whom overweight and obesity were associated with a 1.5 and 1.6 times higher risk of CRC, respectively [53]; risky drinking and smoking were also correlated with CRC risk. The differences between males and females may reflect that males more often have a poor diet and drink and smoke more than females [54]. In females, only obesity was significantly associated with an increased risk of CRC. Moreover, a previous study concluded [55] that only excess weight among men was significantly associated with increased CRC risk; its authors also suggested that this risk might be reversed in obese men taking aspirin. Similarly, in our analysis stratified by normal weight and overweight/obesity, aspirin was protective against CRC in both groups, but the effect was statistically significant only in overweight/obese patients (Supplementary Table S1) [55]. The remaining risk factors were not related to CRC in females, although the HR was >1 for all of them. Individual susceptibility and the type of exposure may explain these results: men probably have a different pattern of consumption than women and are more intensely exposed to alcohol and smoking. In addition, it appears that, without the effect of aspirin, these factors are related to CRC (Supplementary Table S2).
The preventive effect of aspirin has been attributed to the inhibition of cyclooxygenase (COX), the enzyme responsible for the synthesis of prostaglandins [56,57]. COX-2 is abnormally expressed in many cancer cell lines and is involved in carcinogenesis, angiogenesis, and tumor growth. Additional mechanisms of aspirin include the induction of apoptosis through COX-independent pathways. Future research should also address the role of aspirin metabolites and of the intestinal microbiota in CRC prevention.
Long-term aspirin is prescribed for patients with a high cardiovascular risk or with non-focal continuous pain due to arthritis, and the results of this study may support this indication [58,59].
The study has some limitations. First, some patients could buy aspirin directly in pharmacies without a doctor’s prescription, and this consumption is underreported. Second, some patients may not take the medication even if they have purchased it at the pharmacy, in which case aspirin use will be overreported. Third, although the Population-Based Cancer Registry is exhaustive, it cannot be ruled out that some cases were diagnosed in hospitals in other territories and that some cases were not correctly registered. We were unable to study the dose–response relationship between low-dose aspirin and CRC because more than 90% of aspirin use in this study was at a dose of 100 mg/day, which did not allow us to assess effects at higher doses. Another limitation is the lack of specification of the types of CRC, such as familial polyposis or familial cancer genetics, which is a possible source of bias; this information was not recorded in the register. A further limitation concerns CRC cases diagnosed before 2012, which were not included because the Cancer Registry started registering cases in 2012. However, CRC cases prior to 2012 would not have had the opportunity to be exposed to risk factors or aspirin for a period of 5 or more years and would not have been recorded as incident cases in this study. Despite this, CRC is a type of cancer that can be followed by another primary cancer a few years later; therefore, some such cases may be included. Moreover, regarding the risk factors, some bias was present due to under-reporting, although the percentages in our cases were similar to the prevalence observed in Catalonia. Finally, the impact of the excluded cases was minimal because they were younger than 50 years, in whom cancer may be unrelated to these risk factors, or they were cases from other regions, and few patients had to be excluded due to a lack of information on risk factors.
The study’s strengths include the presentation of data on risk factors such as excess weight, smoking, and risky drinking. The study was performed with information from routine clinical practice, with physicians unaware of the study objectives, which avoided investigator bias.
## 5. Conclusions
This retrospective study found an association between aspirin use for ≥5 years and a reduced risk of CRC. The protective effect of aspirin was higher in women. The results also showed an association between the risk of CRC and risk factors such as overweight, obesity, smoking, and risky drinking, specifically in men; in women, the risk of CRC was significantly associated with obesity. The 70–79 and 80–89 years age groups had a higher risk of CRC in both men and women. Therefore, despite some limitations, such as the lack of information on dietary factors and possible bias in the aspirin prescriptions, the results are consistent with the recently published literature.
In general, these results reinforce the need for public health messaging about the harmful effects of smoking, alcohol use, and excess weight, and about the prescribed use of aspirin to prevent CRC. They also encourage continued research into CRC to identify new factors, or interactions among them, associated with this cancer, and they may help the health system focus on prevention and on recommending the continuous use of aspirin under medical supervision.
# How Can a Bundled Payment Model Incentivize the Transition from Single-Disease Management to Person-Centred and Integrated Care for Chronic Diseases in the Netherlands?
## Abstract
To stimulate the integration of chronic care across disciplines, the Netherlands has implemented single-disease management programmes (SDMPs) in primary care since 2010; for example, for COPD, type 2 diabetes mellitus, and cardiovascular diseases. These disease-specific chronic care programmes are funded by bundled payments. For chronically ill patients with multimorbidity or with problems in other domains of health, this approach was shown to be less fit for purpose. As a result, we are currently witnessing several initiatives to broaden the scope of these programmes, aiming to provide truly person-centred integrated care (PC-IC). This raises the question of whether it is possible to design a payment model that would support this transition. We present an alternative payment model that combines a person-centred bundled payment with a shared savings model and pay-for-performance elements. Based on theoretical reasoning and the results of previous evaluation studies, we expect the proposed payment model to stimulate the integration of person-centred care between primary healthcare providers, secondary healthcare providers, and the social care domain. We also expect it to incentivise cost-conscious provider behaviour, while safeguarding the quality of care, provided that adequate risk-mitigating actions, such as case-mix adjustment and cost-capping, are taken.
## 1. Introduction
In many countries, the prevalence of chronic diseases, and in particular of multimorbidity, i.e., two or more chronic diseases, is increasing [1]. Two-thirds of people over 45 will develop multimorbidity in their remaining lifetime [2]. To address their needs, many countries are now implementing different models of integrated care [3]. As the Netherlands was among the first countries to do so on a very large scale, there are lessons to be learned for other countries from how this evolved there, in particular with regard to possible incentives for truly person-centred and integrated care.
Historically, the Dutch healthcare system has had a strong primary care sector, in which general practitioners (GPs) act as gatekeepers to secondary care (i.e., patients need a referral by the GP) [4]. To improve the quality of care for people with chronic diseases, single-disease management programmes (SDMPs) have been introduced in Dutch primary care since 2010, for type 2 diabetes (DM2) [5], cardiovascular risk management (CVR) [6], and chronic obstructive pulmonary disease (COPD) [7]. As a result, the GP is the main caregiver for many patients with chronic diseases in the Netherlands. These SDMPs were based on chronic care standards, which are essentially clinical guidelines for providing high-quality, multidisciplinary, integrated care.
To coordinate the implementation of the SDMPs in a region, a new organisational entity, the primary care cooperative (care group), was introduced. Today, there are 130 primary care cooperatives in the Netherlands, based on collaborations of general practices [8]. For the daily execution of the SDMPs and to reduce the workload of GPs, a new professional role was introduced in the GP practice, namely that of the nurse practitioner. The nurse practitioner regularly monitors symptoms and physiological parameters of patients with the chronic diseases mentioned above and provides lifestyle and coping advice [9].
To further incentivise the integration of multidisciplinary care, the implementation of the SDMPs was supported by a bundled payment model [10]. The bundled payment covers the costs of coordination, the costs of regular check-ups by the nurse practitioner or the GP, three hours with the dietician for people with DM2, the foot therapist for patients with DM2, the physiotherapist for patients with more severe COPD, and a single (tele)consultation with a medical specialist when necessary. Health insurers contract primary care cooperatives, which in turn subcontract GPs and other healthcare providers for providing the services in the bundle [11]. The fee of the bundled payment results from the negotiation between the primary care cooperative and the health insurer about the content and price of the services in the bundle, which thus varies between primary care cooperatives.
Compared to other countries, the scope of the bundled payment in the Netherlands is limited, both in terms of the target population and the services included in the bundle. For instance, in the United States, accountable care organisations are generally responsible for all healthcare expenditures of a delineated patient population [12,13]. In the Gesundes Kinzigtal programme in Germany, the target population includes a group of 33,000 patients from Baden-Württemberg who are insured by two public health insurers; key features include prevention, self-management, reduction of polypharmacy, patient-centred care, and shared decision-making, and the programme is funded by a capitation-based payment combined with a shared savings model [14,15]. In the United Kingdom (UK), general practices receive a lump sum for all GP care, some specialist care, and generic medication [16,17]. In the UK, integrated care organisations have been introduced to stimulate integration between primary care physicians and specialists; these organisations are responsible for a case-mix-corrected budget per capita [18].
As a result of the introduction of the SDMPs in the Netherlands, the vast majority of patients with DM2, CVR, and COPD are now treated in primary care. The quality of chronic care is monitored by InEeN, a primary care interest organisation, which annually publishes process- and outcome-indicators at the care-group level [19]. These indicators were found to improve over time [20], but the clinical relevance and long-term impact of these improvements are uncertain [14,20,21]. Improvements in the work experience of GPs were also reported [9,14,20].
However, the SDMPs have several limitations. First, the chronic care programmes focus on a single chronic disease rather than adopting a holistic approach that considers the social context of the chronically ill patient (e.g., family, living environment, financial resources, and work situation) [21,22]. The programmes mainly aim to improve clinical disease-specific indicators, and less attention is paid to psychological and social aspects. This does not match well with how our perspective on disease and health has evolved. In the Netherlands, many primary care cooperatives have recently embraced the new concept of so-called positive health (‘health as the ability to adapt and to self-manage, in the face of social, physical, and emotional challenges’) that was introduced in 2011 by Huber et al. [23,24]. Second, the scope of the services included in the current bundles is limited. The bundled payment does not cover care that transcends the chronic disease [25,26]: it does not include all primary healthcare and covers no secondary care, mental health care, or social services. It might stimulate collaboration between healthcare providers in primary care (e.g., between the GP and the dietician), but less so between the GP and the specialist or between the GP and the social worker.
The introduction of the SDMP and the bundled payments were expected to improve the efficiency of care delivery and reduce healthcare expenditures or the growth thereof [27]. However, there is evidence that they increased the total costs of healthcare, especially in patients with multimorbidity [14,28]. This cost increase probably results from a combination of the detection of unmet needs in patients with multimorbidity, double declarations, and an incentive to refer the more complex patients to secondary care to avoid costs exceeding the bundled payment [28]. The currently used SDMPs and bundled payments are not suitable for patients with multiple chronic diseases.
As a result, we are currently witnessing several initiatives to broaden the scope of the SDMPs, aiming to provide person-centred and integrated care (PC-IC) [28,29]. This raises the question of which payment model would best support this transition [29]. As a first step, InEeN proposed merging the current bundled payments for people with more than one of the respective chronic diseases to remove duplication [30]. However, that proposal would still not fully incentivise PC-IC. This paper aims to present an alternative payment model that incentivises the integrated nature of a PC-IC programme for people with chronic diseases. It is based on a targeted literature review of (incentives in) traditional and more recent payment models in different countries and is inspired by a specific PC-IC initiative in the Netherlands.
## 2.1. Case Example: OPTIMA FORMA
The proposed payment model was specifically designed to match one of the initiatives to move towards PC-IC in the Netherlands, i.e., the project OPTIMA FORMA—Towards a patient-centred multimorbidity approach for chronic disease management in primary care. In this project, healthcare providers, patients, GP experts with a special interest in DM2, COPD, or CVR, primary care cooperative coordinators, and researchers developed a new integrated care programme that goes beyond the disease-specific clinical domain. The new care programme has a quadruple aim: [1] enhancing patient experience, [2] improving population health, [3] reducing costs, and [4] improving the work life of healthcare providers [31].
In the PC-IC programme, a holistic assessment of the health status is performed, personal goals are set, and interventions to achieve these goals are put in place [32,33]. The first step in this programme is assessing the integral health status of the patient (health across multiple domains—Figure 1), using a (preferably digital) questionnaire at home and physical measurements (i.e., blood pressure, weight, and glucose levels). The second step is an appointment in which the results are discussed with the patient in a semi-structured way. The case manager asks if the patient recognizes himself in the results of the assessment, if there are other issues that the patient would like to discuss, and the priorities of the patient. Personal goals are formulated in the third step, which can range from purely medical goals to social goals. In the fourth step, the healthcare provider and patient will together choose the right interventions to achieve these goals, based on the experience of the healthcare provider, the ideas of the patient, and a list of regional options. Different methods can be used to achieve these goals (i.e., through self-management, with e-health, with coaching from a non-medical care provider, with coaching from a healthcare provider within the GP practice, or with coaching from a healthcare provider outside the GP practice). The goals and interventions are documented in a personal healthcare plan, which is preferably digitally available to all relevant healthcare providers and the patient. Then, referrals are made if necessary, and the treatment is started. An evaluation is planned and carried out, if necessary multiple times. If a treatment goal is reached or another treatment goal is more urgent, the cycle can be repeated. The development of this PC-IC approach is described elsewhere in this issue [34].
## 2.2. Incentives in Payment Models
To design a payment model that would match the PC-IC programme of OPTIMA FORMA, we first studied the incentives for providers and other stakeholders that are present in the current Dutch healthcare system for all types of healthcare services used by patients with chronic disease. We classified these payment models according to the typology of Quinn (2015) [35] and identified the incentives related to these payment methods. Quinn (2015) [35] classifies eight basic payment methods in health care: [1] Per time period (budget/salary), [2] Per beneficiary (capitation), [3] Per recipient (contact capitation), [4] Per episode (case rates/per stay/bundled payments), [5] Per day (per diem/per visit), [6] Per service (fee for service (FFS)), [7] Per dollar of costs (cost reimbursement), and [8] Per dollar of charges (percentage of charges).
Secondly, we studied incentives for stakeholders in innovative payment models. These innovative payment models were identified through the alternative payment model (APM) framework described by the Health Care Payment Learning and Action Network (HCP-LAN) [36]. The identified alternative payment models were: [1] pay for performance, [2] shared savings models, and [3] (sub)population-based bundled payment. We combined elements of these models to design an alternative payment model to stimulate PC-IC care for people with chronic diseases.
## 2.3. Design of an Alternative Payment Model
In the next step, we selected three alternative payment models and explicitly focused on the distinctive elements in their design. Since we aimed to propose an alternative payment model for the Dutch setting, the selection was based on two criteria, namely comprehensiveness and origin in the Dutch setting. The selection included:
- a population-based bundled payment model with an explicit incentive for quality of care, by Cattel and Eijkenaar [37];
- a shared savings model, by Hayen et al. [38];
- the alternative payment model of Steenhuis et al. [39].
We combined the design elements and design choices that were mentioned by these models into Table 1. Table 1 was used to guide the design of an alternative payment model that would fit the PC-IC programme OPTIMA FORMA. The design choices made were primarily informed by theory on provider-incentives and results from previous evaluation studies of the identified innovative payment models: [1] pay-for-performance [40,41], [2] shared-savings models [13,15,42,43], and [3] (sub)population—based bundled payments [37,44,45].
## 2.4. Expected Impact on Integration of Care
In the last step, we projected the expected impact of the innovative payment model on the integration of care, using the spider web linked to the typology of Stokes et al. [46]. This typology classifies the level of integrated care in eight domains: [1] Target population, [2] Time, [3] Sectors, [4] Provider coverage, [5] Financial pooling/sharing, [6] Income, [7] Multiple disease/needs focus, and [8] Quality measurements [46]. The higher the number, the higher the level of integration (1 = integration is poorly stimulated, 2 = integration is moderately stimulated, and 3 = integration is highly stimulated).
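As a toy illustration of how this typology can be applied, the snippet below tabulates domain scores for two payment models; the numeric scores are placeholders, not the values reported in Figure 4.

```python
# Placeholder Stokes-typology scores (1 = poorly, 2 = moderately,
# 3 = highly stimulated); illustrative values only.
STOKES_DOMAINS = [
    "target population", "time", "sectors", "provider coverage",
    "financial pooling/sharing", "income", "multiple disease/needs focus",
    "quality measurements",
]

current_sdmp = dict(zip(STOKES_DOMAINS, [1, 2, 1, 1, 1, 2, 1, 2]))
proposed_model = dict(zip(STOKES_DOMAINS, [2, 3, 2, 3, 3, 2, 3, 2]))

for domain in STOKES_DOMAINS:
    print(f"{domain:30s} SDMP={current_sdmp[domain]}  proposed={proposed_model[domain]}")
```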
## 3.1. Incentives Induced by Different Payment Models
In Table 2, we provide a summary of current and alternative payment models to fund care for patients with chronic diseases in the Netherlands.
Table 2 provides insight into the incentives induced by each payment model. None of the payment models presented above fully incentivises PC-IC. The SDMPs are currently funded by a fixed annual fee, which is paid in three-monthly instalments (chronic care episode). The bundle primarily includes the GP, practice nurses, and a few paramedics working in the primary care sector. Hence, it stimulates collaboration between these service providers, but not beyond. It is likely to improve quality and efficiency in primary care, but it also creates an incentive for adverse selection and for the referral of complex patients to secondary care. This incentive is present even though the fixed fee is based on a weighted average of the resources used by patients with different severities. The model also stimulates so-called ‘over-bundling’, referring to the incentive to enrol more patients than necessary. These undesired incentives can be mitigated by carefully combining elements of different payment models [37]. From a theoretical perspective, a bundled payment with a broader scope in terms of target population and services, combined with a shared savings model and a pay-for-performance model, seems promising [37].
## 3.2. Proposed Payment Model for Person-Centred and Integrated Care
Figure 2 shows the proposed payment model for all patients with one or more chronic conditions, starting with those that are currently included in the existing bundles for DM2, CVR, and COPD. The patient population is delineated by diagnosed chronic disease (at least DM2, CVR, or COPD), insurance (the patient has to be insured at one of the participating health insurers), and GP-practice (GP-practice has to collaborate with one of the participating primary care cooperatives). The payment model consists of three parts: [1] a person-centred bundled payment, [2] a shared savings model that pertains to all healthcare costs, and [3] a pay-for-performance part.
Part one is a person-centred bundled payment that will be prospectively paid to the primary care cooperatives. For each patient, a personal healthcare plan is designed within the OPTIMA FORMA project (Figure 1). The services that can be included in the personal healthcare plan are shown in Figure 3. The bundled payment is based on the weighted average sum of all included services. The weighting is based on the number of patients that use a service and the costs of the service. The primary care cooperative is responsible for the coordination, organization, and financing of all subcontracted participating providers since the primary care cooperative is the main contractor.
Part two is a virtual budget that contains all expected (healthcare) costs of these patients (the contracted bundled payment plus the contracted expenditures outside the bundled payment). The case-mix-adjusted virtual budget will be compared with the realised expenditures to estimate the savings or losses. It is important to cap the expenditures so that the primary care cooperative does not bear the risk for patients with extremely high (unexpected) expenditures. One could start with a one-sided shared savings model, meaning that only the savings, and not the losses, are shared between the health insurer and the primary care cooperative in the region, to mitigate risks for the primary care cooperative and avoid adverse behaviour. The savings will be distributed in a prespecified ratio between the primary care cooperative and the health insurer.
In part three, the prespecified ratio to share the savings depends on the quality of the delivered care. This pay-for-performance part depends on the measured performance of the monitored quality indicators. It is important to avoid time-consuming checklists and process indicators and adopt a small set of key outcome indicators. This requires trust from the health insurers and leads to more flexibility for providers to only provide services that are applicable for a patient instead of ticking boxes to show that they followed the correct process. Quality indicators are measured at primary care cooperative level.
More details are provided in Appendix A.
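To make the interplay of the three parts concrete, the stylized sketch below settles one contract year under these rules. All parameters (budget, cap, sharing ratios, quality score) are illustrative assumptions, not figures from the proposal or from Appendix A.

```python
# Stylized annual settlement: capped one-sided shared savings with a
# quality-dependent sharing ratio. All numbers are illustrative.

def settle_year(virtual_budget, realised_costs, cost_cap, quality_score):
    """Return the savings paid out to the primary care cooperative.

    virtual_budget: case-mix adjusted expected total costs (bundle + outside).
    realised_costs: observed total costs per patient.
    cost_cap:       per-patient cap, so the cooperative does not bear the
                    risk of extremely high (unexpected) expenditures.
    quality_score:  0..1, derived from the agreed key outcome indicators.
    """
    counted = sum(min(cost, cost_cap) for cost in realised_costs)
    savings = virtual_budget - counted
    if savings <= 0:
        return 0.0  # one-sided model: losses are not shared
    # Part three: the cooperative's share of savings grows with quality.
    base_share, quality_bonus = 0.40, 0.30  # hypothetical contract terms
    return savings * (base_share + quality_bonus * quality_score)

payout = settle_year(
    virtual_budget=2_000_000.0,
    realised_costs=[1_800.0] * 1_000,  # e.g., 1000 enrolled patients
    cost_cap=50_000.0,
    quality_score=0.8,
)
print(f"cooperative receives {payout:,.0f}")
```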
The contract between the insurer and the primary care cooperatives should be signed for multiple years, preferably three to five. This provides the opportunity to explore the potential of the alternative payment model (to stimulate integration of care, improve quality of care, and reduce overall healthcare costs) and to build mutual trust between the different stakeholders [39,43,75]. When the contract is renewed, changes can be made accordingly. For instance, after three or five years, the one-sided shared savings model could be transformed into a two-sided shared savings model (in which the primary care cooperative also shares in the potential losses). A two-sided shared savings model stimulates cost-conscious behaviour more strongly, but also increases the financial risk for the primary care cooperative [12,13,76].
## 3.3. Consequences of the Proposed Payment Model
The suggested alternative payment model is expected to be associated with incentives presented in Table 3. Each of the three parts of the proposed payment model has desirable and undesirable consequences, and the latter can be mitigated by the other part(s).
As the range of services that can be included in the individual care plan (Figure 3) is much wider than in the current bundle for the SDMPs, the person-centred bundled payment is expected to stimulate the holistic approach aimed for by the PC-IC programme. The primary care cooperative and its associated care providers will have an incentive to improve efficiency through better coordination and collaboration because the budget extends over a wider range of services, which increases mutual responsibility. One of the perverse incentives of a bundled payment that does not cover the full care path of a patient is that patients are referred to services outside the bundle [50]. The shared savings model mitigates this perverse incentive because the comparison of the actual and the expected expenditure (i.e., the virtual budget) pertains to the total healthcare expenditure. This could result in cost-conscious behaviour [38,77]. The current bundles for SDMPs do not incorporate a shared savings model. If the shared savings model stimulates cost savings through increased efforts to slow down the progression of disease and prevent acute hospital admissions, it also improves health outcomes. However, to mitigate financial risks for the primary care cooperative and avoid adverse behaviour, a one-sided shared savings model is preferred over a two-sided model, especially at the beginning [78]. A perverse incentive of the person-centred bundled payment model and the shared savings model is cutting costs on necessary care; the pay-for-performance part of the model aims to reduce this risk by stimulating a high quality of care.
Like all payment methods, this alternative payment model still induces some undesirable consequences, which are hard to eliminate by any of the three parts of the payment model. The risk of reducing costs by cutting necessary care may remain to some extent. Furthermore, providers could lower the threshold for including patients in the person-centred bundled payment, enrolling patients for whom little cost is expected. However, adequate case-mix adjustment and cost-capping could reduce these risks. To some extent, the person-centred bundled payment also reduces patient choice, because certain care providers are contracted and others might not be. As the personal healthcare plan is based on the needs, capabilities, and wishes of the patients, it is important that the contracted providers are able to provide the services shown in Figure 3 [39]. Another undesired consequence is that the primary care cooperative may bear too much risk because all expenditures are included in the virtual budget, and the cooperative might not be able to control all of these expenditures. The incentives of providers outside the person-centred bundled payment are not well aligned, because these physicians are mostly paid FFS. It is important that these providers feel motivated to collaborate, which might be achieved by investing part of the savings in joint quality-improvement and innovation plans that are attractive to these providers as well. Every pay-for-performance model introduces a risk of gaming behaviour, but the size of that risk depends on the proportion of a provider’s income that comes from the quality payment. The challenge is to strike a balance between a proportion sufficiently large to incentivize quality improvement and sufficiently small to avoid gaming [77].
## 3.4. Impact on Integration
Figure 4 shows the degree of integration of the proposed payment model and of the currently used bundled payments for the SDMPs on the eight dimensions of the framework based on Stokes et al. [46]. Table 4 explains the levels expected for each domain.
## 4. Discussion
The aim of this paper was to design a bundled payment model that incentivises the transition from single-disease management to PC-IC for patients with chronic diseases. Based on a targeted literature review, we identified the incentives which are (theoretically) generated by the eight basic payment methods classified by Quinn [35] and the alternative payment models identified through the APM framework [36]. Based on the identified incentives, we designed an alternative payment model for PC-IC that consists of three main elements, i.e., [1] a person-centred bundled payment, [2] shared savings, and [3] pay-for-performance. The combination of these elements is expected to provide well-aligned, desired incentives towards multi-disciplinary collaboration to meet a patient’s needs, capabilities, and preferences. Each element is necessary to mitigate the undesired incentives of other elements. Furthermore, adequate risk-adjustment and cost-capping are prerequisites to mitigate large risks for providers and to mitigate adverse behaviour.
The implementation of this alternative payment model comes with certain challenges. The first challenge pertains to the investment of resources needed for implementation, which mainly include financial investments (e.g., transition costs to the alternative payment model) and time investments (e.g., to expand collaborations) [39]. To manage the alternative payment model, the software in place should be adapted to monitor the costs and quality of care over time [39]. Administrative costs of monitoring quality of care and negotiating about the conditions of the contract may increase, but this may be offset by a reduction in administrative costs when the services no longer have to be separately claimed [39].
The second challenge is to define the patient population that will be included in the person-centred bundled payment. The population of patients with DM2, CVR, and/or COPD is very heterogeneous in terms of patient characteristics, disease severity, and co-existing morbidity patterns. For an adequate estimation of the expected expenditures, which is necessary to determine the savings or losses, clear inclusion and exclusion criteria need to be defined.
The third challenge is to estimate an appropriate budget for the person-centred bundle. The budget will be estimated by a weighted sum of the costs of all (health)care modules provided in the bundle. The weighting will be carried out by predicting the number of patients that would use the various modules. As time after implementation progresses, figures regarding the relative use of the modules will become more reliable. Specifically, for OPTIMA FORMA, a clinical and economic evaluation study is planned that will provide the first estimates of the utilization of specific services. Micro-costing studies are necessary to determine the costs per module.
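A minimal sketch of such a weighted budget, assuming hypothetical modules, per-user costs, and predicted utilization shares:

```python
# Bundle fee as the weighted sum of module costs, weighted by the predicted
# share of patients using each module. All values are hypothetical.
modules = {
    # module:                 (cost per user, predicted share of patients)
    "holistic assessment":      (120.0, 1.00),
    "dietician":                (180.0, 0.35),
    "physiotherapist":          (310.0, 0.20),
    "specialist teleconsult":    (90.0, 0.15),
}

bundle_fee = sum(cost * share for cost, share in modules.values())
print(f"expected fee per enrolled patient: {bundle_fee:.2f}")
```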
For an appropriate comparison of expected and actual expenditures and to avoid extreme savings or catastrophic losses [79], adequate adjustment for differences in case-mix is important. Many countries with a multiple-payer system (e.g., multiple social health insurers), like the Netherlands, apply some form of risk equalization to distribute (part of) the budget among the payers. Whether the variables included in the risk equalization formula of the health insurance system can also be used to adjust for differences in the case-mix of providers remains to be investigated. Obviously, variables that are influenced by the PC-IC programme cannot be used in the case-mix adjustment, because that would diminish or eliminate the estimated effects [38].
Another challenge when designing the alternative payment model is to determine the quality indicators for the pay-for-performance part of the alternative payment model. It is important to select quality indicators that are sensitive to improvements by the PC-IC programme and the alternative payment model. Based on a systematic literature review, specific design features that contribute to the desired effect of pay-for-performance are: [1] using outcome measures that are very specific and easy to track; [2] targeting individuals or small teams; [3] using absolute rather than relative targets; [4] frequently paying with little delay after delivery; and [5] involving providers from the start in the design [40]. Primary care cooperatives are reluctant to accept financial responsibility for indicators they cannot influence [80]. Conceptually, one would like to have one or more indicators for each of the four aims of PC-IC, but the challenge is to find the right balance between registration burden [79] and information need.
To increase the chances of successful implementation of PC-IC, several requirements need to be met. In their paper on the successful implementation of integrated care for people with multimorbidity, Looman et al. [29] stressed the importance of ten mechanisms, one of which is securing long-term funding and adopting an innovative payment model that overcomes fragmentation. Most important, however, is constructive alignment, meaning that simultaneous measures at the micro, meso, and macro levels are needed to support the implementation of PC-IC [29]. With respect to the payment model, this implies that the incentives for all participating healthcare providers have to be aligned with each other and with existing financial streams [39,80].
A more fundamental question is whether a population-based payment model extending to the entire population of a geographically defined area (e.g., a region) and to all care providers within that area would be a more appropriate alternative to the payment model proposed here, especially because it would stimulate prevention of disease and network care for the entire population in the catchment area, all paid for from one bundled budget [50]. On the one hand, it could fit the integrated nature of the PC-IC programme; on the other hand, the step from the currently used bundled payments to a population-based payment might be too big. As it currently stands, the PC-IC programme OPTIMA FORMA focusses on people with the chronic diseases mentioned above. If the population of interest were defined as the entire population of insured people in a region, the effect of the PC-IC programme could easily be diluted. That does not alter the fact that PC-IC programmes would benefit from economies of scale, which could reduce the financial risks for primary care cooperatives.
## 5. Conclusions
To conclude, we designed a payment model with well-aligned incentives to support the adoption of PC-IC. This model consists of: [1] a person-centred bundled payment; [2] a shared savings model; and [3] a pay-for-performance part in which the sharing ratio between insurer and provider is conditional on the performance of the provider. This alternative model is likely to be an adequate alternative to the relatively limited bundled payment model that is currently used to fund the SDMPs in the Netherlands.
# Systemic Lupus Erythematosus and Risk of Dry Eye Disease and Corneal Surface Damage: A Population-Based Cohort Study
## Abstract
Systemic lupus erythematosus (SLE) potentially involves multiple parts of the ocular system, including the lacrimal glands and the cornea. The present study sought to assess the risk of aqueous-deficient dry eye disease (DED) and corneal surface damage in patients with SLE. We conducted a population-based cohort study using Taiwan’s National Health Insurance research database to compare the risks of DED and corneal surface damage between subjects with and without SLE. Proportional hazards regression analyses were used to calculate the adjusted hazard ratio (aHR) and 95% confidence interval (CI) for the study outcomes. The propensity score matching procedure generated 5083 matched pairs with 78,817 person-years of follow-up for analyses. The incidence of DED was 31.90 and 7.66 per 1000 person-years in patients with and without SLE, respectively. After adjusting for covariates, SLE was significantly associated with DED (aHR: 3.30, 95% CI: 2.88–3.78, p < 0.0001) and secondary Sjögren’s syndrome (aHR: 9.03, 95% CI: 6.86–11.88, p < 0.0001). Subgroup analyses demonstrated that the increased risk of DED was augmented among patients aged <65 years and in females. In addition, patients with SLE had a higher risk of corneal surface damage (aHR: 1.81, 95% CI: 1.35–2.41, p < 0.0001) compared to control subjects, including recurrent corneal erosion (aHR: 2.98, 95% CI: 1.63–5.46, p = 0.0004) and corneal scar (aHR: 2.23, 95% CI: 1.08–4.61, p = 0.0302). In this 12-year nationwide cohort study, we found that SLE was associated with increased risks of DED and corneal surface damage. Regular ophthalmology surveillance should be considered to prevent sight-threatening sequelae among patients with SLE.
## 1. Introduction
Dry eye disease (DED) is a multifactorial disorder that is characterized by the disruption of tear film homeostasis [1]. Tear film instability and hyperosmolarity, ocular surface inflammation and damage, and neurosensory abnormalities play etiological roles in developing ocular symptoms, including punctate epithelial keratitis, filamentary keratitis, superior limbic keratoconjunctivitis, lid parallel conjunctival folds, and lid wiper epitheliopathy [2]. The prevalence of DED varies globally, ranging from 5% to 50% across different countries and regions [3]. In Taiwan, the prevalence rate of DED was reported to be 5% to 34%, with females and the elderly in the majority [4,5,6,7]. It should be noted that patients with DED have a significantly higher risk of corneal surface damage due to progressive ocular surface inflammation and disruption [3]. Recurrent corneal erosion, corneal ulcers, and corneal scars represent common findings among patients with severe corneal surface damage. Previous studies have revealed several risk factors for DED-associated corneal surface damage, including younger age, female sex, diabetes mellitus, and autoimmune diseases (e.g., rheumatoid arthritis) [8]. Importantly, DED symptoms have an adverse impact on patients’ visual functions, daily activities, work productivity, and vision-related quality of life [9].
Systemic lupus erythematosus (SLE) is a chronic, complex, and multifaceted autoimmune disorder whose etiology remains largely unclear [10]. SLE predominantly affects females, especially in their 20s and 30s [10]. The prevalence of SLE varies across different countries [11]; in Taiwan, it was reported to be 97.5 per 100,000 population [12]. Approximately one-third of SLE patients suffer from ocular involvement, of which keratoconjunctivitis sicca represents the most common manifestation [13,14,15,16]. In a previous report, the risks of DED, cataracts, and glaucoma were significantly higher in patients with SLE [17]. However, there are limited population-based data demonstrating the association between SLE and DED or serious corneal surface damage. The relationship between SLE and DED has not been completely clarified owing to methodological drawbacks of previous studies, including small sample sizes (n < 1000) [14,16,17], insufficient adjustment for confounders [14,16,17], and restriction to single institutions [14,16] or specific populations (children) [14]. In addition, few studies have evaluated the potential impact of SLE on the development of corneal surface damage, and the relevant risk factors remain largely unknown [14,16,17]. In this population-based cohort study, we used Taiwan’s National Health Insurance (NHI) research database to evaluate the temporal relationship between SLE and DED or corneal surface damage. Based on the current literature [13,14,15,16,17], we hypothesized that SLE was associated with both DED and corneal surface damage in this 12-year nationwide cohort.
## 2.1. Data Source
This study obtained ethical approval from the Taipei Medical University-Joint Institutional Review Board (approval no. TMU-JIRB-N202210011; date of approval: 6 October 2022). Written informed consent was waived by the Institutional Review Board due to the retrospective nature of this research. All methods were performed following the Declaration of Helsinki 2013 and relevant study guidelines [18]. Taiwan’s National Health Insurance program was launched in March 1995 and offers insurance to more than 99% of the 23.3 million Taiwanese residents. The NHI research database contains comprehensive claims data on the insured beneficiaries, including demographic characteristics (e.g., date of birth and sex), medical diagnoses, prescription drugs, and medical expenditures. The NHI research database has been widely used for public health statistics and risk assessment [19,20,21]. In the present study, we included subjects from the three Longitudinal Health Insurance Databases (LHID2000, LHID2005, and LHID2010), each of which contains original claims data of 1 million beneficiaries randomly sampled from the original NHI research database in the years 2000, 2005, and 2010, respectively [22].
## 2.2. Inclusion and Exclusion Criteria
Patients who had at least 2 rheumatology clinic visits with the diagnoses of SLE between 1 January 2002 and 30 June 2013 were included consecutively. We utilized the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes to ascertain the diagnoses of SLE, coexisting diseases, and ocular disorders (Supplementary Table S1). The index date was defined as the date of the first SLE diagnosis. Patients were excluded due to the following conditions: any previous diagnoses of DED, corneal ulcers, recurrent corneal erosion, corneal scars, interstitial and deep keratitis, corneal neovascularization, ocular burns, or open globe injury in the ophthalmology service before the index date. Subjects were also excluded if they had been prescribed eye lubricants before the index date or died in the follow-up period.
## 2.3. Outcome Assessment
The primary outcome was DED, defined as a diagnosis made twice by certified ophthalmologists together with a prescription of cyclosporine ophthalmic emulsion in the ophthalmology care service (Supplementary Table S1). Under the reimbursement regulations of Taiwan’s National Health Insurance, cyclosporine ophthalmic emulsion can be used when a patient’s Schirmer test score is less than 5 mm in 5 min [5,8]. The secondary outcomes included secondary Sjögren’s syndrome (SS) and severe forms of corneal surface damage, defined as any diagnosis of corneal ulcers, recurrent corneal erosion, or corneal scars made twice by certified ophthalmologists.
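For illustration, the sketch below applies this claims-based definition to a flat table of claims. The column names and the single ICD-9-CM code are placeholders; the actual code lists are in Supplementary Table S1.

```python
# Hedged sketch of the claims-based DED outcome: at least two dry eye
# diagnoses by ophthalmologists plus a cyclosporine ophthalmic emulsion
# prescription. Column names and code list are illustrative.
import pandas as pd

DED_ICD9 = {"375.15"}  # placeholder; see Supplementary Table S1 for the real list

def meets_ded_definition(claims: pd.DataFrame) -> pd.Series:
    """claims: one row per claim with patient_id, specialty, icd9, drug."""
    grouped = claims.groupby("patient_id")
    two_ophtho_dx = grouped.apply(
        lambda g: ((g["specialty"] == "ophthalmology")
                   & g["icd9"].isin(DED_ICD9)).sum() >= 2
    )
    has_cyclosporine = grouped.apply(
        lambda g: (g["drug"] == "cyclosporine ophthalmic emulsion").any()
    )
    return two_ophtho_dx & has_cyclosporine
```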
## 2.4. Covariates for Model Adjustment
Insurance premium was classified into $0–$500, $501–$800, and >$800 United States dollars per month. The ICD-9-CM codes of physicians’ diagnoses within 24 months before the index date were employed to determine the following comorbidities, chosen based on data availability and existing literature: hypertension, diabetes mellitus, coronary artery disease, chronic obstructive pulmonary disease, chronic liver disease, chronic kidney disease, cerebrovascular disease, thyroid disease, major depressive disorder, anxiety disorder, sleeping disorder, and cancer (Supplementary Table S1) [23]. The Charlson comorbidity index score was calculated to evaluate the comorbidity level of included subjects [24]. We also evaluated the concurrent prescription of systemic corticosteroids within 6 months after the index date. The numbers of hospitalizations and emergency visits within 24 months before the index date were analyzed to assess the level of medical resource utilization of the studied patients.
## 2.5. Statistical Analysis
A non-parsimonious multivariable logistic regression model was used to calculate a propensity score for SLE and non-SLE subjects. Each SLE subject was matched to a non-SLE control using the nearest-neighbor matching algorithm, within a tolerance limit of 0.05 and without replacement, to balance the distributions of age, sex, and monthly insurance premium between the two groups [25]. Baseline patient characteristics were compared between matched pairs using the absolute standardized mean difference [26]. We used multivariable Cox proportional hazards regression models to calculate the adjusted hazard ratio (aHR) and 95% confidence interval (CI) for the study outcomes. The multivariable models were adjusted for age, sex, monthly insurance premium, coexisting diseases, Charlson comorbidity index score, use of systemic corticosteroids, number of hospitalizations, and number of emergency room visits. Kaplan-Meier curves and log-rank tests were used to compare the cumulative incidence of ophthalmological outcomes between the two groups. Stratified analyses were also conducted by age (≥65 or <65 years), sex, Charlson comorbidity index score, and use of systemic corticosteroids to examine the risk of DED within these strata. A two-sided p-value of <0.05 was considered statistically significant. All statistical analyses were conducted using Statistical Analysis System (SAS), Version 9.4 (SAS Institute Inc., Cary, NC, USA).
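The matching was performed in SAS; as a non-authoritative illustration, the sketch below reproduces the same nearest-neighbor, no-replacement procedure with a 0.05 caliper in Python. The variable names and the logistic-regression feature set are assumptions for the example.

```python
# Illustrative 1:1 propensity-score matching without replacement
# (nearest neighbor, caliper 0.05), mirroring the procedure described above.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_pairs(df: pd.DataFrame, caliper: float = 0.05):
    """df needs an 'sle' indicator plus the matching covariates below."""
    X = df[["age", "male", "premium_mid", "premium_high"]]  # hypothetical columns
    ps = LogisticRegression(max_iter=1000).fit(X, df["sle"]).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)
    cases = df[df["sle"] == 1]
    controls = df[df["sle"] == 0].copy()
    pairs = []
    for case_idx, case in cases.iterrows():
        if controls.empty:
            break
        distance = (controls["ps"] - case["ps"]).abs()
        best = distance.idxmin()
        if distance[best] <= caliper:
            pairs.append((case_idx, best))
            controls = controls.drop(index=best)  # matching without replacement
    return pairs
```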
## 3.1. Baseline Patient Characteristics
The matching procedure generated 5083 matched pairs with 78,817 person-years of follow-up for analyses (Supplementary Figure S1). The baseline distributions of demographic and patient characteristics are shown in Table 1. Notably, patients with SLE were more likely to have more comorbidities, higher Charlson comorbidity index scores, prescriptions of systemic corticosteroids, and greater numbers of hospitalizations and emergency room visits.
## 3.2. Dry Eye Disease
The incidence of DED was 31.90 and 7.66 per 1000 person-years in the SLE and non-SLE groups, respectively (Table 2). The median interval between enrollment and DED diagnosis was 2.6 years (interquartile range: 0.6–5.6) in the SLE patients and 5.1 years (2.3–7.5) in the non-SLE controls (p < 0.0001). The results of the univariate and multivariable proportional hazards regression analyses for DED are shown in Table 3. After adjusting for covariates, SLE was significantly associated with an increased risk of DED compared to non-SLE controls (aHR: 3.30, 95% CI: 2.88–3.78, p < 0.0001). Figure 1A demonstrates the cumulative incidence of DED in the two groups. SLE was also linked to secondary SS (aHR: 9.03, 95% CI: 6.86–11.88, p < 0.0001). Other independent factors for DED were age (aHR: 1.03), female sex (aHR: 2.56), monthly insurance premium (501–800 vs. 0–500 USD, aHR: 0.92; ≥801 vs. 0–500 USD, aHR: 1.20), hypertension (aHR: 0.78), cerebrovascular disease (aHR: 0.68), sleeping disorder (aHR: 1.24), Charlson comorbidity index (1 vs. 0, aHR: 1.45; 2 vs. 0, aHR: 1.35; ≥3 vs. 0, aHR: 0.74), and use of systemic corticosteroids (aHR: 1.41). Subgroup analyses showed that the aHR for DED was higher in patients aged <65 years (aHR: 3.48) and in females (aHR: 3.47) than in those aged ≥65 years (aHR: 1.99) and in males (aHR: 2.20), respectively (Table 4).
## 3.3. Corneal Surface Damage
The incidence of corneal surface damage was 3.93 and 2.12 per 1000 person-years in the SLE and non-SLE groups, respectively (Table 2). The median time to corneal surface damage was 4.3 years (interquartile range: 1.7–7.8) in the SLE patients and 3.8 years (interquartile range: 2.0–7.4) in the non-SLE controls (p = 0.9610). The multivariable model showed that SLE was significantly associated with an increased risk of corneal surface damage (aHR: 1.81, 95% CI: 1.35–2.41, p < 0.0001; Table 5 and Figure 1B). Further analyses showed that SLE was significantly associated with higher risks of recurrent corneal erosion (aHR: 2.98, 95% CI: 1.63–5.46, p = 0.0004) and corneal scar (aHR: 2.23, 95% CI: 1.08–4.61, p = 0.0302). Another independent factor for corneal surface damage was female sex (aHR: 1.94, 95% CI: 1.28–2.94, p = 0.0017).
## 4. Discussion
The present study demonstrated that patients with SLE exhibited significantly greater risks of DED and corneal surface damage, especially recurrent corneal erosion, compared with age-, sex-, and insurance premium-matched controls. Subgroup analyses further revealed that the higher SLE-associated risk of DED was observed in both males and females, in subjects aged ≥65 and <65 years, and in those with or without systemic corticosteroid treatment. Considering the devastating impact of DED and corneal surface damage on visual function, patients with SLE should be alerted to these corneal disorders.
SLE is the third most common autoimmune disorder in Taiwan [27], and its prevalence increased remarkably during the 21st century [28]. Although ocular symptoms are not included in the 11 diagnostic criteria of SLE, they are not uncommon, affecting about one-third of patients [29]. Keratoconjunctivitis sicca is the most common ocular manifestation of SLE [30], although all parts of the eye, including the sclera, uvea, retina, and optic nerve, can be involved [31]. Several mechanisms may account for the association between SLE and DED. One is comorbid Sjögren’s syndrome, which reduces tear production. In addition, infiltration of immune cells and immune complexes into the epithelial basement membrane is evident [32], and increased proinflammatory cytokines, such as interleukin-17, are detected in the tear film of SLE patients [33,34]. The present study also found a significant association between SLE and secondary SS in Taiwanese patients. However, the adjusted risk of DED relative to controls was similar between SLE patients with and without systemic corticosteroid treatment, which may point to an ocular-specific inflammatory response in SLE patients and suggests that topical immunosuppressants are more suitable for the management of DED [35,36].
A corneal ulcer is a lesion of the corneal epithelium and a major threat to vision [37]. Without proper treatment, patients may have to rely on corneal transplantation to regain their vision [38]. Most corneal ulcers result from infection by bacteria, viruses, fungi, or protozoa [38]. Non-infectious corneal ulcers, by contrast, usually present as peripheral ulcerative keratitis (PUK) and are highly associated with autoimmune diseases [39]. Fibrocyte and macrophage infiltration of the corneal stroma triggers the inflammatory response, and immune complex deposition is found in the capillary network of the cornea in PUK [40,41]. In the present study, the overall risk of corneal surface damage was significantly higher in patients with SLE. Although there was only a trend toward increased corneal ulcers in SLE patients, the risks of recurrent corneal erosion and corneal scar, the more severe types of corneal damage, were significantly increased. The lack of significance for corneal ulcer among SLE patients may result from ulcers of other etiologies, such as infectious or contact lens-related keratitis.
The strength of the present study was the delineation of the association between SLE and DED or corneal surface damage. Moreover, the population-based design provided reliable epidemiological evidence and good generalizability for risk assessment. DED causes substantial discomfort for SLE patients, and the management of these ocular manifestations of SLE should be emphasized. In addition, although SLE is not the major contributor to PUK, it did increase the risks of recurrent corneal erosion and corneal scar, which may exert a devastating effect on eyesight. However, there were some limitations to the present study. First, since the NHI research database is diagnosis- and treatment-based, laboratory data were unavailable. Therefore, the severity and progression of SLE, DED, and corneal surface damage could not be further evaluated. Second, the lack of information on social habits (e.g., alcohol and tobacco consumption) and physical examination data (e.g., body mass index, blood pressure, and visual acuity) might also introduce bias into the analytical results. Third, we matched only age, sex, and monthly insurance premium between the two groups in the propensity-score matching process, in order to increase the sample size and statistical power of the matched datasets. Given that the incidence of corneal surface injury was relatively low (approximately $2\%$ to $5\%$ over the 12-year follow-up), a large patient sample is essential to detect a potential risk difference between the SLE and non-SLE populations. Fourth, because the use of corticosteroids is a known risk factor for DED and corneal surface damage, an imbalance in the distribution of corticosteroid prescriptions might bias the study results. Further studies are needed to evaluate the potential impact of corticosteroids and immunosuppressants on corneal diseases among SLE patients. Finally, our cohort was only followed up until 31 December 2013, owing to the regulations of the NHI research database.
## 5. Conclusions
The present study demonstrated a higher risk of DED and severe forms of corneal surface damage in patients with SLE. Considering the increasing prevalence of SLE, these vision issues, which substantially affect quality of life, should be emphasized to rheumatologists and ophthalmologists. Prophylactic and therapeutic management should be further developed for this susceptible population.
# Duration and Influencing Factors of Postoperative Urinary Incontinence after Robot-Assisted Radical Prostatectomy in a Japanese Community Hospital: A Single-Center Retrospective Cohort Study
## Abstract
Objectives: Post-operative urinary incontinence (PUI) after robot-assisted radical prostatectomy (RARP) is an important complication; PUI occurs immediately after postoperative urethral catheter removal and, although approximately $90\%$ of patients improve within one year after surgery, it can significantly worsen their quality of life. However, information is lacking on its nature in community hospital settings, particularly in Asian countries. The purposes of this study were to investigate the time required to recover from PUI after RARP and to identify its associated factors in a Japanese community hospital. Methods: Data were extracted from the medical records of 214 men with prostate cancer who underwent RARP from 2019 to 2021. We then calculated the number of days elapsed from the surgery to the initial outpatient visit confirming PUI recovery among the patients. We estimated the PUI recovery rate using the Kaplan–Meier product limit method and evaluated associated factors using a multivariable Cox proportional hazards model. Results: The PUI recovery rates were $5.7\%$, $23.4\%$, $64.6\%$, and $93.3\%$ at 30, 90, 180, and 365 days following RARP, respectively. After adjustment, those with preoperative urinary incontinence experienced significantly slower PUI recovery than their counterparts, while those with bilateral nerve sparing experienced recovery significantly sooner than those with no nerve sparing. Conclusion: Most PUI improved within one year, but the proportion of patients recovering before 90 days was smaller than previously reported.
## 1. Introduction
Prostate cancer is the third most common cancer globally, with 1,414,259 diagnosed cases in 2020 [1]. Among various treatment methods for prostate cancer, the main treatment measure has been surgery, namely radical prostatectomy. Robot-assisted radical prostatectomy (RARP) in particular has become the standard procedure, accounting for the majority of cases undergoing radical prostatectomy [2].
One of the most important postoperative complications of radical prostatectomy is PUI, which transiently but markedly jeopardizes the postoperative quality of life (QOL) of patients [3,4]. While PUI is reportedly milder with RARP than with open and laparoscopic radical prostatectomy, it remains an important complication that can worsen the QOL of prostate cancer patients; its management therefore holds significant clinical implications [5].
PUI after radical prostatectomy occurs immediately after removal of the urethral catheter post-surgery. Recovery occurs over time, with approximately $90\%$ of patients improving within 1 year after surgery [6]. Various preoperative factors have been reported to be associated with PUI, such as older age, obesity, the presence of comorbidities, preoperative erectile dysfunction, a short membranous urethral length, urethral volume, urethral morphology, and bladder factors including preoperative detrusor (voiding muscle) overactivity and poor bladder compliance [7,8,9,10]. Among surgical methods, urethral sphincter-sparing and nerve-sparing techniques, as well as the newly developed hood technique, are effective in preventing PUI [11]. In addition, pelvic floor muscle exercises performed preoperatively have been indicated to promote early recovery from PUI [12].
In Japan, the number of prostate cancer diagnoses has been rising as the population ages. In 2018, prostate cancer had the highest number of patients among males, with 92,021 annual cases [13]. Moreover, the number of deaths has been increasing, reaching a record high of 12,759 in 2020 [14]. The improved prognosis for prostate cancer has further emphasized the clinical significance of proper PUI management following RARP, the predominant surgical procedure for prostate cancer. However, previous evidence on PUI has mostly been based on research performed in Europe and the United States, with only limited reports available from Asia. In addition, Japanese clinical research on the duration of PUI has progressed mostly in university hospitals, and information is lacking on PUI management in general community hospitals. This is an important perspective given that RARP has been widely performed outside of university hospitals, at least in Japan. Therefore, we aimed to investigate the duration of PUI after RARP at Jyoban Hospital, a community hospital that has conducted a large number of RARP procedures, and its associated factors.
## 2.1. Setting and Participants
This investigation was conducted at Jyoban Hospital of Tokiwa Foundation in Iwaki City, Hamadori Region of Fukushima Prefecture. The population of Iwaki City was approximately 320,000 as of October 2019. It has traditionally been regarded as a remote area, suffering from a physician undersupply in the long term: specifically, its number of medical doctors was 167 per 100,000 population in 2018, compared to the Japanese national average of 247 per 100,000 in the same year; and the average age of medical doctors was 56.4 years old in 2018, compared to the national average of 49.9 years old in the same year. In these difficult circumstances, the Tokiwa Foundation took over the operation of Jyoban Hospital in 2010 from Iwaki City, and the hospital has developed over time during the last decade. During this process, its urology department has taken the lead, expanding into one of the largest community-based urological departments in Japan at present. Indeed, the Department of Urology now has the latest version of the Da Vinci operation system, Da Vinci Xi (Intuitive Surgical Inc., Sunnyvale, CA, USA), and conducts various types of robot-assisted laparoscopic surgery, including RARP. In 2018, the department saw the hospitalization of 638 patients with prostate cancer, the fourth highest number in the country for prostate cancer treatment, with 113 RARPs performed in 2019.
In this study, we considered the patients who underwent RARP from 1 April 2019 to 31 March 2021. Regarding the detailed procedure, the indication for nerve sparing was determined separately for the left and right sides. Nerve sparing was performed for low- and intermediate-risk patients, according to D’Amico’s classification, on the side where no cancer was detected on biopsy or MRI. Along with this principle, we took the patient’s wishes into consideration when making a comprehensive decision on whether nerve sparing could be performed. Further, lymph node dissection (LND) was not performed in most of our patients. This was because there is little evidence that lymph node dissection in prostate cancer provides additional benefits to the patient receiving surgery, while it may increase the risk of lower-extremity edema. In this sense, although lymph node dissection would allow accurate staging, its direct therapeutic benefit is unknown, and it is associated with poorer perioperative outcomes [15]. In our institution, the procedure was performed by multiple surgeons, i.e., a primary surgeon and two or three assistants (according to chart data), and it could be performed by an experienced surgeon or by residents under the guidance of experienced surgeons.
## 2.2. Data Extraction
From the medical records of Jyoban Hospital, we extracted data on the following: the dates of RARP and of the initial outpatient visit confirming PUI recovery; age; presence or absence of type 2 diabetes; history of alcohol consumption and smoking; presence or absence of transurethral prostatic surgery for benign prostatic hyperplasia; presence or absence of preoperative radiotherapy; presence or absence of preoperative urinary incontinence; systolic and diastolic blood pressure; height; weight at surgery; body mass index (BMI); obesity, defined as a BMI of 25 or above; albumin level; initial prostate-specific antigen (PSA); preoperative Gleason score; D’Amico’s classification; pathological T stage; presence or absence of lymph node dissection; main operator; presence or absence of nerve sparing; postoperative complications of inguinal hernia and intestinal obstruction; and presence or absence of continued pelvic floor muscle exercises.
The duration of PUI was defined as the number of days elapsed from the date of the RARP to the date of the earliest outpatient visit at which the physician in charge confirmed recovery from PUI. We defined PUI recovery as the point when the two following conditions were met: (1) the patient was aware that their PUI had improved, and (2) they changed their urinary incontinence pads no more than once per day [3]. Patients who used incontinence pads but did not change them were considered to have recovered from PUI because they may have used the pads as a precautionary measure. If the recorded degree of PUI diverged between a doctor and a nurse, the lower grade was selected. Patients for whom the date of the outpatient visit could not be verified, and patients for whom neither the number of urinary incontinence pad changes nor the degree of PUI could be verified, were excluded.
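The two-condition recovery rule lends itself to a compact predicate. The sketch below merely restates the definition above in code; the function and argument names are our own illustration, not part of the study protocol.

```python
def pui_recovered(reports_improvement: bool, pad_changes_per_day: int) -> bool:
    """PUI recovery as defined in this study: the patient is aware that the
    incontinence has improved AND changes pads at most once per day."""
    return reports_improvement and pad_changes_per_day <= 1

# Aware of improvement and one pad change per day -> counted as recovered
assert pui_recovered(True, 1) is True
assert pui_recovered(True, 2) is False   # too many pad changes
assert pui_recovered(False, 0) is False  # no subjective improvement
```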
## 2.3. Analysis Method
We conducted two analyses in this study. First, we estimated the rate of PUI recovery following RARP using the Kaplan–Meier product limit method. Then, we constructed a Cox proportional hazards regression model for PUI recovery to evaluate its associated factors. We considered all the sociodemographic and clinical variables as covariates, using the backward stepwise variable selection method (inclusion criterion, $p \leq 0.1$). Covariates with a small number of participants were re-grouped as necessary. As a sensitivity analysis, we employed a multiple imputation method to fill in missing values for all the covariates. Under the assumption of missing at random, we constructed the model 10 times using a Markov chain Monte Carlo method and integrated the results. All the data were analyzed with Stata version 15.0 (StataCorp, College Station, TX, USA).
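To make the two analysis steps concrete, the sketch below reproduces the pipeline with the Python lifelines library on synthetic data (the study itself used Stata); all column names and values are illustrative assumptions, not the study data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 209
df = pd.DataFrame({
    "days_to_recovery": rng.integers(20, 400, n),  # days from RARP to recovery/censoring
    "recovered": rng.integers(0, 2, n),            # 1 = recovery observed, 0 = censored
    "preop_incontinence": rng.integers(0, 2, n),
    "bilateral_nerve_sparing": rng.integers(0, 2, n),
})

# Step 1: Kaplan-Meier product limit estimate of time to PUI recovery;
# the recovery rate at day t is 1 - S(t).
kmf = KaplanMeierFitter()
kmf.fit(df["days_to_recovery"], event_observed=df["recovered"])
print(1 - kmf.survival_function_at_times([30, 90, 180, 365]))

# Step 2: Cox proportional hazards regression; every column other than the
# duration and event columns is treated as a covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_recovery", event_col="recovered")
cph.print_summary()  # hazard ratios with 95% CIs, analogous to Table 2
```

Fitting the full covariate set corresponds to the starting point of a backward stepwise selection; lifelines leaves the stepwise elimination itself to the analyst.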
## 3. Results
A total of 214 patients underwent RARP, and the analysis was performed on 209 patients after excluding five patients with missing values in the outcome (i.e., the time interval between surgery and urinary continence).
Sociodemographic and clinical patient information is shown in Table 1. The median age of the patients was 71 years (interquartile range 67–76), $11.5\%$ ($$n = 24$$) had diabetes, $9.6\%$ ($$n = 20$$) had preoperative urinary incontinence, and the median BMI was 24.4 (interquartile range 22.2–26.2), with 85 patients ($40.7\%$) being obese. The median values of albumin and initial PSA were 3.9 g/dL (interquartile range 3.7–4.1) and 8.9 ng/mL (interquartile range 5.9–16.0), respectively. The most common preoperative Gleason score was 7, with a proportion of $45.0\%$ ($$n = 94$$); $48.8\%$ of the patients were diagnosed as high risk according to D’Amico’s classification before RARP, and $84.7\%$ of the patients were diagnosed as pathological T2 after RARP. Further, $64.1\%$ ($$n = 134$$) of the patients were operated on by experienced doctors, and unilateral and bilateral nerve sparing were achieved in $45.7\%$ ($$n = 91$$) and $5.5\%$ ($$n = 11$$) of the patients, respectively. Postoperative complications of inguinal hernia and intestinal obstruction occurred in $7.7\%$ ($$n = 16$$) and $2.4\%$ ($$n = 5$$) of the patients, respectively. Lastly, pelvic floor muscle exercise was performed by $93.8\%$ ($$n = 195$$) of the patients.
Figure 1 shows the Kaplan–Meier curve for PUI recovery. The rates of urinary continence were evaluated at 30 days (4 weeks/1 month), 90 days (12 weeks/3 months), 180 days (24 weeks/6 months), and 365 days (48 weeks/12 months), with recovery rates of $5.7\%$, $23.4\%$, $64.6\%$, and $93.3\%$, respectively.
Table 2 shows the results of the univariable and multivariable Cox proportional hazards regression analyses. After adjustment, those with preoperative urinary incontinence experienced significantly slower PUI recovery than those without (hazard ratio 0.28, $95\%$ confidence interval 0.14–0.57). Patients with high albumin levels had slower recovery from PUI than those with low albumin levels (hazard ratio 0.54, $95\%$ confidence interval 0.35–0.81). In addition, recovery from PUI was slower after surgery performed by residents than after surgery performed by experienced physicians (hazard ratio 0.61, $95\%$ confidence interval 0.44–0.86). In contrast, those with bilateral nerve sparing experienced PUI recovery significantly sooner than those with no nerve sparing (hazard ratio 2.87, $95\%$ confidence interval 1.43–5.77), while those with unilateral nerve sparing also tended to experience PUI recovery sooner than those with no nerve sparing (hazard ratio 1.35, $95\%$ confidence interval 0.97–1.87). The sensitivity analysis using the multiple imputation method did not converge, and we could not obtain reasonable findings.
## 4. Discussion
In this study investigating PUI recovery following RARP at a community hospital in a remote area of Japan suffering from a long-term physician undersupply, we primarily found that the rates of PUI recovery at 90 days (12 weeks/3 months) and 365 days (48 weeks/12 months) were $23.4\%$ and $93.3\%$, respectively. We also found that preoperative urinary incontinence, higher albumin levels, and surgery performed by inexperienced surgeons were associated with delayed PUI recovery, while nerve sparing was significantly associated with early recovery. This study primarily presents important knowledge for medical institutions in similar rural community settings in Japan, but we believe that its implications could be valuable beyond this setting.
With regard to PUI recovery at 365 days, our findings were no worse than those reported in the systematic review by Ficarra et al., where the proportion of PUI recovery at 12 months ranged from 89 to $92\%$ [3]. We may have overestimated the proportion of PUI recovery in this study, given that our definition of PUI recovery was liberal, allowing the use of one pad per day, while some previous studies used no pads per day as the definition [3]. Nonetheless, it is notable that our observation was superior to those of all previous studies in the same systematic review that defined PUI recovery as no more than one pad per day [3]. In this respect, it is reasonable to say that we achieved outcomes at 365 days at least comparable to those of previous studies.
In contrast, our finding of PUI recovery at 90 days ($23.4\%$) was inferior to the figures reported in previous studies. Indeed, Ficarra et al. reported in their systematic review that the proportion of patients experiencing three-month PUI recovery was $65\%$ [3]. It is difficult to conclusively determine the mechanism underlying the disparity between our findings and those of previous studies. However, our observations about nerve sparing could provide important clues for understanding this phenomenon. In our study, nerve sparing was associated with early PUI recovery. However, the proportion of patients with bilateral nerve sparing was relatively low at only $5.5\%$, primarily due to the high-risk profile of the patients, with only $11.0\%$ classified as low risk according to D’Amico’s classification. A contrasting phenomenon was observed in a previous study by Kim et al., in which $53.9\%$ (285/529) of the patients underwent bilateral nerve sparing and $60\%$ of them experienced PUI recovery by 12 weeks [16]. Further, given that the effect of nerve sparing appeared to be strongest during the earlier phase of PUI recovery [16], the low proportion of bilateral nerve sparing may have been a primary contributor to the delayed PUI recovery.
In addition, the presence of preoperative symptoms of urinary incontinence may have delayed the PUI recovery in this study. Ficarra et al. reported that preoperative lower urinary tract symptoms delayed the recovery from PUI [3]. However, given that only a limited proportion of the patients experienced preoperative urinary incontinence in this study, its contribution to delayed PUI recovery may have been limited.
In our study, surgery performed by inexperienced surgeons led to delayed PUI recovery, which is in line with multiple previous studies. While the acceptance of young doctors and their on-the-job training is a critical factor allowing hospitals located in rural settings to sustain their workforce and provide necessary care for local residents, it is also important to minimize the drawbacks resulting from this process. Jyoban Hospital recently implemented the dual console of the Da Vinci surgical system, which allows a senior doctor to provide real-time supervision of an operation conducted by younger surgeons.
Moreover, the patients with higher albumin levels had a longer recovery time from PUI, which has not been reported in the previous literature [17]. This finding differs from what one would intuitively expect from surgical knowledge about wound healing and requires further investigation.
It is also important to note the potential implications of PUI recovery following RARP in rural community settings. Indeed, our finding at 12 months was rather superior to that observed in a Japanese elite university hospital: Hakozaki et al. reported that only $85.0\%$ of their patients experienced PUI recovery one year after RARP [18]. This suggests that, rather than the status and ranking of hospitals, it is the experience and proficiency of surgeons that affects patient outcomes, as explained above.
However, rural community hospitals face clear disadvantages compared to university hospitals, such as the challenge of organizing a comprehensive support framework for patients that involves multiple hospital staff. In this study, almost all patients performed preoperative pelvic floor muscle exercises, but no early improvement was seen. One reason for this is that we could not assign a single instructor to each patient but instead had three (or more) instructors working in rotation; therefore, the instructions for the exercises were not consistent. In addition, the instructions were mainly oral explanations; thus, the patients’ pelvic floor muscle contraction during the exercise was not accurately confirmed. Furthermore, in elderly patients, understanding of the exercise could have been insufficient. These factors may have contributed to the inadequate effectiveness of the exercise, and we believe that a lack of manpower prevented us from providing better PUI training. Nevertheless, it has been demonstrated that pelvic floor muscle exercises are clinically significant [19,20,21], particularly in the early phase after surgery [12]. Thus, it is desirable to improve the way in which we teach pelvic floor muscle exercises so that every patient can enjoy their benefits. For example, pelvic floor muscle exercise pamphlets could be useful for standardizing and unifying the content of the instruction given by physical therapists.
## 5. Limitations
There were several limitations to this study. First, this was a single-institution study with only a limited number of patients. This could limit the generalizability of the observed findings and may have resulted in the omission of some important factors from the regression analysis, such as obesity [22]; however, this is the first study investigating PUI recovery after RARP in a Japanese community setting, which is an important novelty of the study. Second, we did not evaluate various confounding factors, such as anatomical ones (prostate size, preoperative urethral length, and maximum urethral closure pressure) and details of the surgical procedure. As a result, the findings of the regression analysis may be limited. Third, the definition of PUI recovery, relying on the self-reported count of urinary pads, may be affected by various biases. For example, pad usage may not have been standardized among the patients, which may have affected the reported pad counts.
## 6. Conclusions
In this study, which examined the recovery from PUI following RARP in a Japanese community setting, we found that PUI had resolved in $93.3\%$ of the patients at 365 days, which was comparable to previous reports. However, recovery at 90 days was observed in only $23.4\%$ of the patients, which was slower than reported in previous studies. Our analysis revealed that preoperative urinary incontinence, higher albumin levels, and surgery performed by inexperienced surgeons were associated with a delayed recovery from PUI. On the other hand, nerve sparing was significantly associated with an earlier recovery from PUI.
# Analysis and Evaluation of Dental Caries in a Mexican Population: A Descriptive Transversal Study
## Abstract
Oral diseases are an important public health problem owing to their high prevalence and strong impact on people, particularly in disadvantaged populations. There is a strong relationship between socioeconomic situation and the prevalence and severity of these diseases. Mexico is among the countries with the highest frequency of oral diseases, most notably dental caries, which affects more than $90\%$ of the Mexican population. Materials and method: A cross-sectional, descriptive, and observational study was carried out on 552 individuals who underwent a complete cariogenic clinical examination in different populations of the state of Yucatan. All individuals were evaluated after providing informed consent, with the consent of legal guardians obtained for those under legal age. We used the caries measurement methods described by the World Health Organization (WHO). The prevalence of caries and the DMFT and dft indexes were measured. Other aspects were also studied, such as oral habits and the use of public or private dental services. Results: The prevalence of caries in permanent dentition was $84\%$. Moreover, it was found to be statistically related to the following variables: place of residence, socioeconomic level, gender, and level of education ($p \leq 0.05$). For primary teeth, the prevalence was $64\%$, with no statistical relation to any of the variables studied ($p > 0.05$). Regarding the other aspects studied, more than $50\%$ of the sample used private dental services. Conclusions: There is a high need for dental treatment in the population studied. It is necessary to develop prevention and treatment strategies considering the particularities of each population, driving collaborative projects to promote better oral health conditions in disadvantaged populations.
## 1. Introduction
Oral diseases constitute a significant public health problem because of their high prevalence and strong impact on people and society in terms of pain, social, and functional disability [1]. Currently, nine out of ten people in the world are at risk of suffering from an oral disease [2,3,4,5].
Mexico is among the countries with a high frequency range in oral diseases. The prevalence of caries affects more than $90\%$ of the Mexican population [6]. According to the Universal Catalogue of Health Services (CAUSES), the Mexican state offers medical coverage that also includes dental specialties [7].
If we focus on the Yucatan region (in the south-eastern area of Mexico), according to data from the General Direction of Epidemiology of the Ministry of Health of the Government of Mexico, in 2019 the region had rates of oral diseases comparable to those of the rest of the country and markedly worse than those of North American (United States of America) or European (United Kingdom or Sweden) countries [8].
Thus, we find an exaggeratedly high proportion of children with early childhood caries (ECC) receiving health services (31.8 vs. $6\%$ in the USA) and a caries index (DMFT) at 12 years of age of 2.6 vs. 1.2 in the USA or 0.8 in Sweden. Although in adults aged 35–44 years the data are similar in terms of DMFT, the percentage of fillings (the so-called restoration index) is markedly higher in the countries mentioned above than in the Yucatan region ($20\%$ compared with $52\%$ in the United Kingdom or $63\%$ in the USA). Even greater is the difference in terms of edentulism or lack of functional occlusion in adults aged 65–74 years ($55.8\%$ non-functional occlusion in the southeast region of Mexico compared with $38.2\%$ in the USA) [9,10,11].
Worldwide, the incidence of oral diseases, particularly in disadvantaged populations, remains high [12,13]. Among the main ones, decayed teeth stand out as the most prevalent, followed by periodontal conditions, malocclusions, and oral trauma, which affect the quality of life of those who suffer from them [2].
Dental caries is defined as a multifactorial chronic disease that develops under the following conditions: a susceptible host, a cariogenic oral flora, and an appropriate substrate, all of which must be present for a specified period of time and which, in turn, are influenced by community, family, and individual predisposition [14,15,16,17].
Caries experience is the number of teeth/surfaces that have caries lesions (at a specified threshold), restorations, and/or are missing owing to caries, accumulated by an individual up to a designated point in time. Though new models or indexes are being explored internationally, the majority of studies measure the caries experience by means of DMFT/S (dft/s) at varying detection levels [18].
Peres and Cols. (2009) stated that there is a very strong and persistent relationship between socioeconomic status and the prevalence and severity of oral diseases [3,12]. This is usually linked to a cariogenic diet and poor oral hygiene, as well as the consumption of tobacco and alcohol and low accessibility to oral health services. Other factors, such as dental malposition, parental education, and associated systemic diseases, usually coexist with a lack of oral hygiene [12,19,20,21].
Concerning caries, prevention through proper oral hygiene, a non-cariogenic diet, and topical fluoride is the most effective method to decrease its development. Early detection would also avoid severe complications such as advanced caries, pulpitis, endodontic treatments, and loss of teeth [22]. Caries prevention has traditionally meant inhibition of caries initiation, otherwise called primary prevention. Primary, together with secondary and tertiary prevention, comprise non-operative and operative treatments for caries management [18].
The main objective of this research was to analyse the prevalence and index of dental caries in primary and permanent dentition defined by type of population, rural or urban, among populations of the state of Yucatan, Mexico.
## 2.1. Study Type and Settings
An observational, cross-sectional, and descriptive study, carried out as part of the “Yucatán International Cooperation Project”, was conducted in Temax, Hunucmá, Umán, and Mérida.
The study sample consisted of 552 individuals between 5 and 64 years old who requested dental care. All participants signed an informed consent form and filled out an individual survey on oral health, oral hygiene habits, and quality of life. The information from underage patients was collected by their parents or legal guardians once the consent was signed. In addition, each individual underwent a complete clinical dental examination focused on cariogenic pathology.
The World Health Organization (WHO) criteria for dental caries and care needs related to the condition of the teeth were applied [23]. All of the participants in the study were examined in natural light and a no. 5 flat mirror was used. The participants brushed their teeth before the examination and the teeth were not dried prior to the inspection.
All patients had the same clinical examiner (A.M.), with the same methodology used on all of them to avoid bias. It was decided to carry out data collection through the work of a single examiner (a dentist with extensive experience in caries assessment). With the aim of measuring the consistency of the observations, the examiner underwent a so-called intra-observer calibration, with the ratio of agreement obtained via a Kappa test (0.85).
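As an illustration of this calibration step, intra-observer agreement can be quantified with Cohen’s kappa over two scoring passes of the same teeth. The sketch below uses scikit-learn with made-up scores, not the study data.

```python
from sklearn.metrics import cohen_kappa_score

# Two hypothetical scoring passes by the same examiner over ten tooth
# surfaces (0 = sound, 1 = carious), recorded some time apart.
first_pass  = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
second_pass = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(first_pass, second_pass)
print(f"kappa = {kappa:.2f}")  # values around 0.85, as reported, indicate strong agreement
```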
## 2.2. Study Variables
The variables analysed were age, sex, place of residence (urban or rural area), socioeconomic level (low rural, low urban, or urban environment), and highest level of studies achieved.
The variables obtained from the questionnaire on oral health attitudes and habits and on the use of dental health services were also studied. As clinical variables, the prevalence of caries and the caries indexes, DMFT for permanent dentition and dft for primary dentition, were studied according to the WHO caries criteria [23].
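For reference, the per-subject index is a simple sum of affected teeth. The sketch below shows the computation under the usual WHO definition; the DataFrame and its column names are hypothetical.

```python
import pandas as pd

def dmft(teeth: pd.DataFrame) -> pd.Series:
    """DMFT per subject = decayed + missing (due to caries) + filled
    permanent teeth; dft for primary dentition is analogous, counting
    decayed and filled primary teeth."""
    return teeth["decayed"] + teeth["missing_due_to_caries"] + teeth["filled"]

subjects = pd.DataFrame({"decayed": [4, 2],
                         "missing_due_to_caries": [1, 0],
                         "filled": [1, 3]})
print(dmft(subjects))         # per-subject DMFT scores
print(dmft(subjects).mean())  # group mean, the figure reported in Section 3.4
```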
## 2.3. Statistical Analysis
Statistical analysis was performed using Stata version 15 (StataCorp, College Station, TX, USA). Continuous variables were summarized through means and standard deviations (SDs). Categorical variables are presented through frequency distributions, with simple and cumulative frequencies reported as percentages.
Associations of the prevalence of caries and the caries indexes with age, sex, area of origin, socioeconomic level, and education were studied. ANOVA was used for continuous variables, and the chi-square test was performed for categorical variables. The critical value to identify statistically significant differences was $p \leq 0.05$.
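A minimal sketch of these two tests with SciPy (the study used Stata) is shown below; the group values and contingency counts are synthetic stand-ins, not the study data.

```python
from scipy.stats import f_oneway, chi2_contingency

# One-way ANOVA, e.g., comparing DMFT across three socioeconomic groups
low_rural = [5.1, 6.3, 7.2, 4.8]
low_urban = [6.9, 7.4, 8.1, 6.2]
urban     = [5.5, 6.0, 6.8, 5.9]
f_stat, p_anova = f_oneway(low_rural, low_urban, urban)

# Chi-square test, e.g., caries (yes/no) by area of residence
observed = [[180, 40],   # rural: with caries, without caries
            [250, 82]]   # urban: with caries, without caries
chi2, p_chi2, dof, expected = chi2_contingency(observed)

print(p_anova, p_chi2)  # compared against the 0.05 threshold
```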
## 3.1. Sociodemographic Data
The mean age of the population was 28.8 ± 16.2 years. Four age groups were categorized: children 6–12 years, adolescents 12–19 years, young adults 20–34 years, and older adults 35–64 years. Among these age groups, the so-called “older adults”, made up of subjects between 35 and 64 years of age, represented the largest group, with $37.32\%$. In terms of gender, women accounted for $60.51\%$ of the sample, while men accounted for $39.49\%$.
Concerning other socio-demographic data, fifty-six percent of the population studied was rural, with a low socioeconomic level. Twelve percent of the sample reported having no education. Of the $92\%$ who had completed studies, $83\%$ had completed primary, secondary, or high school and only $17\%$ had reached university level. All these data are summarised in Table 1.
## 3.2. Oral Health Attitudes and Practices
The following are some of the most significant results from the oral health attitudes and practices survey of the study participants.
More than half of the surveyed population is very concerned about their oral health ($58\%$), while $7\%$ of them acknowledge having little concern for their oral health (Table 2).
Fifty-four percent of individuals reported brushing three times a day, $34.8\%$ twice a day, and $7\%$ only once a day. Among women, $55.9\%$ reported brushing three times a day, whereas this percentage was lower among men ($38.5\%$) ($p \leq 0.001$) (Table 3).
Furthermore, $90.45\%$ of the study population used a manual toothbrush. Only three subjects used an electric toothbrush. Sixty-four percent of the population used a toothpaste as the main complementary product for toothbrushing. Almost one in five subjects used mouthwash, while dental floss was only used by $12\%$ of the population (Table 4).
## 3.3. Use of Dental Services
With regard to the frequency of dental check-ups, it can be seen that the most frequently answered response by the population to the question regarding when they should visit their dentist was “when they have a problem” (Table 5).
Moreover, $8.15\%$ of the population acknowledged that they had never visited a dentist, while $20.29\%$ had done so less than six months ago. The rest of the population under study had visited their dentist more than one year ago (Table 6).
More than half of the population used private services, while $35.59\%$ used the public dental services made available by the state. Ten percent stated that they were not aware of the difference between the two types of care (Table 7).
## 3.4. Dentition Status
For primary dentition, the prevalence of caries was $64\%$, with a dft value of 2.8 ± 3.19. More specifically, for children aged 5 and 6 years old, the prevalence obtained was $55\%$, with a dft value of 2.45 ± 3.21. Analyzing the relationship between the dft index and the socio-demographic variables, no statistical significance was observed ($p > 0.05$ in all cases) (Table 8 and Table 9).
Regarding permanent dentition, $94\%$ of the population ($$n = 520$$) had at least one permanent tooth in the mouth. The prevalence of caries was $84\%$, with a DMFT value of 6.3 ± 0.24. The prevalence of caries in individuals aged 12 years was also calculated, which was $54\%$, with a DMFT value of 1.1 ± 1.11. Regarding the relationship between the DMFT index and socio-demographic variables, a statistically significant relationship was observed between DMFT and age, type of residence, socioeconomic level, and educational level ($p \leq 0.05$) (Table 10 and Table 11).
## 4. Discussion
According to the WHO, dental caries is the most prevalent disease in the world, affecting more than $80\%$ of the world’s population, in addition to being considered the most prevalent pathology in the child population [24,25].
For the selection of our sample, and despite having studied a relatively large number of patients, we recruited patients who came to the Yucatan International Cooperation Project requiring dental care. This could be a limitation of our study, as they were people with a perceived need for treatment and thus may not be representative of the whole population, which should be taken into account when interpreting the results. Nevertheless, the authors believe that the study serves to provide a snapshot of the oral health of the Yucatecan population.
## 4.1. Primary Dentition
The prevalence of carious lesions in primary dentition in the Yucatecan population studied was $64\%$, a figure similar to that in the work carried out by the Mexican group of Montero and Cols. [26]. According to the results of Martínez-Pérez and Cols. and Serrano-Piña and Cols., 5 out of 10 children present caries in primary dentition, while for Villalobos and Cols. or García Pérez and Cols., up to 9 out of 10 children have dental caries [27,28,29,30].
In the present investigation, no statistically significant differences were found ($p > 0.05$), although a tendency toward a higher prevalence of decayed teeth in the rural environment was observed, matching previous studies [30]. Regarding gender, a higher prevalence of caries was found in the female sex and, with respect to socioeconomic level, $81\%$ of individuals who presented decayed primary teeth belonged to the “low rural” level; in this case, no statistically significant differences with the prevalence of caries were found either, in line with the findings of Frencken et al. [31].
The total dft value for the studied population was 2.8 ± 3.19. The carious component was 2.69 ± 3.08 and the filled component was 0.11 ± 0.11, which indicates a high need for treatment, with more than two untreated carious teeth per individual. Our dft results for rural areas are similar to those of Medina-Solís and Cols., who obtained a dft of 2.86 in a sample of children aged 6 to 12 years in a non-urban area of Campeche, a state adjacent to Yucatán. For urban areas, we obtained a dft of 2.7, compared with the value of 2.4 obtained in the above-mentioned study. It is important to note that, had teeth lost as a result of caries been considered, the dft value could have been higher and might have resembled the results of Romo and Cols. or Villalobos and Cols. [29,32,33].
Although no statistically significant differences were found with the socio-demographic variables, the authors observed that the dft index was higher in men (3.1) than in women (2.3); that the “low rural” socioeconomic group obtained better dft values than the “low urban” group; and that, in terms of schooling, individuals without schooling had higher dft levels than those who had completed primary school, at 3.6 and 2.3, respectively.
Following the WHO instructions, 5- and 6-year-old children were specifically studied. Although the sample size was small ($$n = 22$$), given that data collection was carried out on a random sample, the prevalence was $55\%$, with a dft of 2.45 ± 3.21. The results of the present research in terms of prevalence are similar to those of the National Dental Caries and Fluorosis Surveys in Mexico [34]. In the case of dft, the difference is more evident, being greater in our study than in the survey (2.45 vs. 1.5). Regarding the decayed component, the value obtained in our research was 2.36 ± 3.23, while that of the national survey was 1.3. This could be explained by the fact that the target population of the project was the one with the fewest economic resources. The filled component was 0.09 ± 0.29, highlighting the existing treatment needs in this sector.
## 4.2. Permanent Dentition
On the other hand, the prevalence of caries in permanent dentition was $84\%$, a result similar to those of other studies with the same methodology carried out in other regions of Mexico: Romo and Cols. in Nezahualcóyotl, Aamodt and Cols. in Chiapas, and Islas-Granillo and Cols. in Hidalgo [30,32,35,36].
The results showed a greater presence of decayed teeth in urban populations compared with rural ones ($p = 0.027$), a fact that coincides with the results published by Ortega-Maldonado et al. [37]. Regarding socioeconomic level, there was a significant association between this variable and the prevalence of caries ($p = 0.031$); a lower income level coincided with a greater presence of this pathology, which agrees with other studies published in Mexico by Villalobos-Rodelo and Cols., in addition to the one published by Vega-Lizama and Cols. [12,38,39,40].
In relation to schooling, a statistically significant relationship was found with the prevalence of caries in permanent dentition ($p \leq 0.001$), as well as with gender ($p \leq 0.05$), as caries was more prevalent in the female sex; these results coincide with other international studies [33].
The DMFT index obtained in our population was 6.3 ± 0.24. The decayed component was 4.1 ± 3.91, the missing component was 1.3 ± 2.78, and the filled component was 0.9 ± 2.16. According to our study, only one in six decayed teeth is filled in our population, showing once again that the need for dental care in our sample is high. Other results obtained by Mexican researchers, such as Aamodt and Cols., further corroborate the high demand for care that exists in this population [35,39].
The DMFT value was higher in women, a fact that coincides with other studies such as that of Romo and Cols. In addition, statistically significant differences ($p \leq 0.05$) were found with the age variable, with a clear tendency for the DMFT value to increase with age, coinciding with other results published in the international literature [32,37]. Regarding the variable place of residence, the urban population obtained higher DMFT levels than the rural population ($p \leq 0.05$). Regarding socioeconomic level, the population of the low urban group obtained the highest DMFT value ($p \leq 0.05$). Both statistically significant associations may be due to the greater access of the urban population to products with large amounts of refined sugars and the high intake of carbonated beverages in this sector [38,40].
Following the WHO instructions concerning age analysis, 12-year-olds were also specifically studied and, although the sample size ($$n = 26$$) was small because of the random selection of the population, the prevalence of caries was $46\%$, with a DMFT value of 1.1 ± 1.11. The decayed component was 1.0 ± 1.75, that of missing teeth was 0.38 ± 0.19, and that of filled teeth was 0.03 ± 0.19. The data obtained in the last National Dental Caries and Fluorosis Survey in Mexico for the 11–12-year-old group showed a $47\%$ prevalence of caries in permanent dentition and a DMFT of 1.5, coinciding almost exactly with the results obtained in the present study [34].
## 4.3. Oral Health Habits and Use of Dental Health Services
Oral health is a determinant of quality of life and the acquisition of preventive habits such as toothbrushing can reduce a large number of oral problems. This adoption of preventive habits clearly increases the likelihood of being in optimal health and has been shown to be significantly influenced by socioeconomic and demographic factors [4,5].
With this initial premise, basic issues such as the frequency of tooth-brushing three times a day and the use of fluoride toothpaste are widespread among the population. However, there is still a sector of the population (the so-called fourth world) that, for different socio-economic and/or cultural reasons, has markedly lower rates of oral hygiene habits than the rest of the population [4,5].
In our study, we have seen a clear relationship between the level of education and the caries index, with a statistically significant difference between both aspects ($p \leq 0.001$), which is in agreement with similar studies in other areas of Latin America or the rest of the world [41,42,43].
It is, therefore, socio-cultural issues that mark the acquisition of health habits in general and oral health habits in particular. In our sample, around $90\%$ of the population brushed their teeth twice a day or more. This percentage is very similar to that of populations with a higher socioeconomic level, such as the Swedish [43] or Spanish [41] populations. It seems that these habits are widely established in the Yucatecan population.
The use of topical fluorides in the form of toothpaste or mouthwash has been shown to be a major advance in caries control and their widespread use has lowered caries rates worldwide. The percentage of people using fluoride products in our sample was around $80\%$, somewhat lower than in the countries mentioned above [41,42,43].
With regard to the use of dental health services, the study population reported using mostly private clinics, having visited a dentist in the last two years at a rate of around $60\%$. The use of these health services is markedly different in other countries. In the USA, almost the entire population uses private dental services, while in European countries, this percentage varies according to the portfolio of services offered by the different countries [41,42,43].
## 5. Conclusions
The present investigation reflects a high need for treatment in the served area of Yucatan, finding more than two untreated caries per individual, with a significantly higher prevalence of decayed teeth in rural areas, among those with low-income levels, and in women. It is crucial to generate a prevention and treatment strategy considering the particularities of each population, improving collaborative projects to promote better oral health conditions not only in the Mexican population, but also in the international arena. |
# Personality Determinants of Exercise-Related Nutritional Behaviours among Polish Team Sport Athletes
## Abstract
A proper diet increases the effectiveness of training and accelerates post-workout regeneration. Among the factors determining eating behaviour are personality traits, including those of the Big Five model, i.e., neuroticism, extraversion, openness, agreeableness, and conscientiousness. The aim of this study was to analyse the personality determinants of peri-exercise nutritional behaviours among an elite group of Polish athletes practicing team sports. The study was conducted in a group of 213 athletes, using the authors’ validated questionnaire of exercise-related nutrition behaviours and the NEO-PI-R (Neuroticism Extraversion Openness-Personality Inventory-Revised). A statistical analysis was performed using Pearson’s linear correlation and Spearman’s rank correlation coefficients as well as a multiple regression analysis, assuming a significance level of α = 0.05. It was shown that the level of the overall index of proper peri-exercise eating behaviours decreased with increasing neuroticism (r = −0.18) and agreeableness (r = −0.18). An analysis of the relationship with the personality traits (sub-scales) of the Big Five model demonstrated that the overall index of proper peri-exercise nutrition decreased with the intensification of three neuroticism traits, i.e., hostility/anger (R = −0.20), impulsiveness/immoderation (R = −0.18), and vulnerability to stress/learned helplessness (R = −0.19), and four traits of agreeableness, i.e., straightforwardness/morality (R = −0.17), compliance/cooperation (R = −0.19), modesty (R = −0.14), and tendermindedness/sympathy (R = −0.15) ($p \leq 0.05$). A multiple regression analysis showed that the full model consisting of all the analysed personality traits explained $99\%$ of the variance in the level of the proper peri-exercise nutrition index. In conclusion, the index of proper nutrition under conditions of physical effort decreases with the intensification of neuroticism and agreeableness among Polish athletes professionally practicing team sports.
## 1. Introduction
Proper nutrition is an important factor determining exercise capacity and the effectiveness of post-exercise restitution processes [1,2,3,4,5]. Nutritional recommendations concern the time, quantity, and type of meals, snacks, and liquids consumed before, during, and after physical exercise, taking into account the specificity of the discipline and the individual predispositions and food preferences of the competitor [5]. Nutrition before training or a competition should focus on adequate hydration and nutrient supply; during exercise, on replenishing fluid and energy losses; and after exercise, on accelerating post-exercise regeneration. A highly significant aspect of peri-exercise nutrition is proper hydration, which is achieved by consuming water and isotonic drinks [1,2,3,4,5,6,7,8,9]. Pre-workout meals should be rich in carbohydrates (with different glycaemic indices) and low-fat protein products, as well as vitamins and mineral salts [5]. Before prolonged exercise (>60 min), an additional energy reservoir may come from a carbohydrate snack [10]. Nutrition during post-exercise recovery should help restore disturbed homeostasis, optimise the water and electrolyte balance, and aid the resynthesis of muscle and liver glycogen, while managing the acid-base balance and the replenishment of cellular protein losses [1,2,3,4,5,11]. Indicators of nutrition and hydration status are among the significant biomarkers related to the health, performance, and post-exercise regeneration of athletes [12].
Meanwhile, research among athletes has indicated numerous quantitative and qualitative nutritional irregularities. In this regard, a low supply of carbohydrates, vitamins (including antioxidants), and mineral salts (including potassium and calcium) has been found [13,14,15,16,17]. These are ingredients that play a key role in the energy metabolism of physical exercise, skeletal muscle contraction, and the reduction of oxidative stress [1,10,18,19]. The described nutritional deficiencies may be associated with the insufficient consumption of products with a high nutritional value, which has been indicated among athletes at research centres in different countries [20,21,22,23,24,25,26].
In the past few years, the health and nutritional behaviour of various population groups, including athletes, have been negatively affected by the COVID-19 pandemic [27,28,29,30]. At the same time, health training and a varied, balanced diet, rich in, among others, vegetables, fruits, and fish, containing immunostimulating ingredients (e.g., vitamins C and D and omega 3 PUFAs), can support the immune system and reduce health risks [29,31]. In endurance athletes, the relationship has been described between a rational diet, physical activity, and an improvement of physical capacity as well as body composition after a mild COVID-19 infection [32].
The nutritional behaviour of athletes is dynamic and conditioned by numerous factors, including personality [33,34,35]. Personality is one of the important aspects of human functioning in personal and social dimensions, related to, among others, cognitive, emotional processes, motivation, undertaken tasks, and achieving success. Personality determines the consistency of predispositions, mental functions, and the behaviour of individuals [36,37]. One of the dominant personality models in the psychology of traits is the Big Five model created by Costa and McCrae, which includes five main personality dimensions (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness), and their sub-categories [38,39]. Neuroticism describes a person’s level of emotional stability and resilience. People who score high in this dimension are sensitive and more frequently experience negative emotions, such as fear, anger or sadness, while people with low neuroticism are self-confident and emotionally stable. Extraversion refers to a person’s level of sociability, enthusiasm, and assertiveness. People who score high in this dimension tend to be outgoing, talkative, and energetic, while low scorers tend to be more reserved and introverted. Openness to experience refers to a person’s level of curiosity, creativity, and willingness to experiment. High scorers in this dimension are creative, open-minded, and interested in new experiences, while low scorers are more conventional and practical. Agreeableness is related to a person’s level of kindness, compromise, and empathy. People who score high on this dimension tend to be friendly, compassionate, and cooperative, while people who score low in this dimension tend to be more competitive and suspicious. Conscientiousness refers to a person’s level of self-discipline and responsibility. People who score high in this dimension tend to be responsible, effective, and goal-focused, while people with low scores tend to be more easy-going and less organised. In this way, the Big Five personality model allows for multi-faceted personality characteristics and explains socially and culturally significant behaviours that depend on the configuration of several personality traits at the same time [36,37]. Due to the significance of personality for success in sports, the personality assessment of athletes is an important area of sport psychology [40]. The level of neuroticism, extraversion, agreeableness, and conscientiousness can affect the results of competition in individual sports, although there is no single universal personality profile of athletes [40]. In studies among athletes, low neuroticism has mostly been noted, especially in high-level athletes [41,42,43,44].
Previous Polish research on the relationships between personality traits of the Big Five model and nutritional behaviours among people performing increased physical activity primarily concerned diet health quality among physical education students [45] and diet quality as well as nutritional behaviours among team sports athletes [46,47]. In the cited studies, the authors indicated relationships between the personality dimensions of the Big Five model and indicators of a healthy and unhealthy diet, implementing the qualitative recommendations of the Swiss nutrition pyramid for male athletes practicing team sports. The results of the above-mentioned studies mostly indicate the positive predictive significance of extraversion and conscientiousness, as well as the negative significance of neuroticism for the quality of athletes’ diets [46,47]. Relationships between personality traits and eating behaviours as well as nutritional status have also been the subject of research in population groups other than athletes [48,49,50,51].
To the authors’ knowledge, there is no research on the personality determinants of specific nutritional behaviours among athletes under conditions of physical exertion and post-exercise recovery. Therefore, given the importance of the peri-exercise diet for exercise capacity and the effectiveness of regeneration processes, and assuming the complexity of the determinants of nutritional behaviours, a study was carried out on the personality determinants of athletes’ peri-exercise nutritional behaviours. The aim of this research was to analyse the personality determinants of peri-exercise nutritional behaviour among an elite group of Polish athletes professionally training in team sports.
The following research questions were posed: (1) How are athletes’ peri-exercise nutritional behaviours shaped? (2) What are the relationships between personality traits and athletes’ peri-exercise nutritional behaviours?
Referring to the results of previous research [45,46,47] and the characteristics of the personality dimensions of the Big Five model (including neuroticism, associated with emotional lability; extraversion, associated with positive emotionality; conscientiousness, associated with the ability to control impulses and a focus on achieving specific goals; and agreeableness, connected with less involvement in performed tasks) [36], a research hypothesis was formulated. It was assumed that personality traits are related to peri-exercise eating behaviours: as the levels of extraversion and conscientiousness increase, the scale of correct eating behaviours also increases, while as neuroticism and agreeableness intensify, it decreases.
## 2.1. Participants
The research was carried out among a group of 213 Polish athletes (males) professionally practicing team sports, including basketball ($$n = 54$$), volleyball ($$n = 53$$), football ($$n = 53$$), and handball ($$n = 53$$). The basic criterion for selection into the study group was practicing sport at a professional level, i.e., at the level of the highest league in Poland, for at least 3 years. The basic criteria for exclusion were belonging to a lower league class and/or failure to meet the criterion of minimum sports experience (3 years). The studied athletes, in relation to the current classification of activity levels and sports abilities [52], can be assigned to Tier 3 (highly trained/national level). The age of the examined athletes was between 18 and 38 years ($M = 26.1$; $SD = 4.5$), with sports experience ranging from 3 to 20 years ($M = 8.2$; $SD = 4.5$). The median number of training sessions per week was 7, and the volume of a single training unit was 90 min. The study was performed in accordance with the principles of the Declaration of Helsinki, after obtaining informed consent from the participants. The research protocol was approved by the Bioethics Committee at the District Medical Chamber in Kraków (No. 105/KBL/OIL/2021).
## 2.2.1. Evaluation of Athletes’ Peri-Exercise Nutritional Behaviour
An original questionnaire regarding the qualitative recommendations for peri-exercise nutrition was used to assess the nutritional behaviour of athletes. The questionnaire consists of 15 statements (items) concerning eating behaviours during the peri-exercise period. Responses were given on a 5-point Likert scale (from 1 to 5: “definitely no”, “rather no”, “hard to say”, “rather yes”, and “definitely yes”). The items included in the questionnaire concerned eating behaviours that are particularly important for post-exercise nutrition strategies, which increase the ability to exercise and the pace of regeneration processes, as indicated by the authors of scientific papers in the field of nutritional recommendations for athletes [2,4,5]. The questionnaire enquiries concerned the following: intake of isotonic drinks during exercise; the type of meal consumed before and after training; consumption of snacks and the type as well as amount of beverage intake before and after training, including drinks containing carbohydrates and electrolytes; and consumption of carbohydrate and protein products after training/competition. The subject of assessment was the athletes’ peri-exercise eating habits during the previous 6 months. Based on the results of the questionnaire, the degree of implementing individual nutrition recommendations and the overall index of rational nutrition behaviours during the peri-exercise period were assessed (on a scale of 1–75 points, assuming that the higher the index, the more intense the rational peri-exercise eating behaviours). The questionnaire was validated. Reliability was assessed by repeated testing (test–retest; $$n = 32$$). The linear correlation coefficient between the two administrations was calculated and the null hypothesis $H_0$: $r = 0$ was tested via Student’s t-test, obtaining a result confirming the reliability of the scale ($r = 0.378$; $p \leq 0.035$). Good internal consistency of the scale was also confirmed (Cronbach’s α coefficient was 0.77).
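For illustration, the two reported reliability statistics (Cronbach’s α for internal consistency and the test–retest correlation) could be computed as in the following minimal Python sketch. The simulated 32 × 15 response matrix is purely an assumption standing in for the athletes’ data; the study itself did not use this code.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulated 5-point Likert responses: 32 athletes x 15 items (illustration only).
responses = rng.integers(1, 6, size=(32, 15))
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

# Test-retest reliability: correlate total index scores from two administrations.
retest = np.clip(responses + rng.integers(-1, 2, size=responses.shape), 1, 5)
r, p = stats.pearsonr(responses.sum(axis=1), retest.sum(axis=1))
print(f"test-retest r = {r:.3f}, p = {p:.3f}")
```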
## 2.2.2. Evaluation of Athletes’ Personality Traits
The NEO-PI-R (Neuroticism Extraversion Openness-Personality Inventory-Revised) by P.T. Costa and R.R. McCrae [39] was used in the Polish adaptation by J. Siuta [53]. Characteristics of the NEO-PI-R personality inventory, according to the authors of the original tool and its Polish adaptation [39,53], have been presented in our previous publication [46]. Similarly, the personality traits of the examined group of athletes have already been the subject of our other publication [46]; therefore, they will not be presented in this work.
## 2.3. Statistical Analysis
The collected numerical material was subjected to statistical analysis using the Statistica 13.3 package. Statistical analysis was performed using Pearson’s linear correlation and Spearman’s rank correlation coefficients (depending on the nature of the variables). Multiple regression analysis was also carried out to check which of the variables could explain the level of the index of proper peri-exercise nutrition. A forward stepwise regression procedure (without an intercept) was used in the calculations. The analysis also included the calculation of the multivariate coefficient of determination ($R^2$) and the standard error of estimation ($s_y$), as well as the values of the standardised partial regression coefficients $b^*$, which are a measure of the relative significance of the individual personality traits (independent variables $X$) in the model. The analyses were conducted assuming a significance level of α = 0.05.
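As an illustration of forward stepwise regression without an intercept on standardized variables, here is a minimal Python sketch (the study used Statistica; the data below are simulated, and the 0.05 entry criterion is an assumption). Note that when no intercept is fitted, statsmodels, like most packages, reports the uncentered $R^2$, which is one reason no-intercept models can show very high explained variance.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 213
# Standardized stand-ins for the Big Five scores (X) and the nutrition index (y).
X = rng.standard_normal((n, 5))
names = ["neuroticism", "extraversion", "openness", "agreeableness", "conscientiousness"]
y = 0.4 * X[:, 3] - 0.2 * X[:, 0] + rng.standard_normal(n)

def forward_stepwise_no_intercept(X, y, names, alpha=0.05):
    """Forward stepwise OLS without an intercept; coefficients on standardized
    variables correspond to the b* values reported in the paper."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for j in remaining:
            model = sm.OLS(y, X[:, selected + [j]]).fit()
            pvals[j] = model.pvalues[-1]         # p-value of the candidate variable
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    final = sm.OLS(y, X[:, selected]).fit()
    return {names[j]: b for j, b in zip(selected, final.params)}, final.rsquared

coefs, r2 = forward_stepwise_no_intercept(X, y, names)
print(coefs, f"R^2 = {r2:.3f}")
```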
## 3.1. Athletes’ Peri-Exercise Nutritional Behaviour
With regard to implementing the recommendations of peri-exercise nutrition, it was found that almost all athletes (approx. $98\%$) consumed 200–250 mL of isotonic drinks after training. A high percentage (over $80\%$) consumed fruit and vegetables in their meals before and after training. At the same time, over $70\%$ of the athletes consumed complex carbohydrates in the meal prior to training, 500–600 mL of fluids 2–3 h before training and carbohydrate products after exercise. More than half of the athletes declared the consumption of 1 litre of fluids per 1 h of training and a carbohydrate snack before long-duration training. To a lesser extent (about one-third of the group), the athletes consumed a snack at least 40 min pre-training, a meal at least 2 h before training, 200–600 mL of fluids immediately before training, and complete protein in their pre-exercise meals (Table 1).
The assessment of the peri-exercise eating behaviours (according to the median) confirms that, to a high degree, the athletes consumed at least 1 litre of fluids per hour of training (Me = 4.00), complex carbohydrates in the pre-training meal, vegetables and fruits before training, 500–600 mL of fluids 2–3 h before training, a snack before training lasting more than 2 h, a meal within 30–60 min after training, and carbohydrates in the post-workout meal. The other assessed nutritional recommendations were implemented to a lesser extent and at a similar level (Me = 3.00). The overall index of proper peri-exercise nutrition was 51.9 points (out of a maximum of 75) (Table 2).
## 3.2. Personality Traits and Peri-Exercise Nutritional Behaviour of Athletes
An analysis of the relationship between personality traits and the implementation of peri-exercise nutrition recommendations among athletes showed that the level of the overall index of correct eating behaviours (consistent with the recommendations of post-exercise nutrition strategies) decreased with increasing neuroticism (r = −0.18) and agreeableness (r = −0.18). In terms of particular aspects of peri-exercise nutrition, it was shown that with the intensification of neuroticism, the consumption of complex carbohydrates in the pre-workout meal (R = −0.15), snacks before training lasting more than 2 h (R = −0.21), complete protein (R = −0.20), and complex carbohydrates in the post-workout meal (R = −0.14) decreased. At the same time, with the intensification of extraversion, the consumption of at least 1 litre of water/isotonic drink for each hour of training (R = −0.17) and the consumption of a meal within 30–60 min of ending training (R = −0.15) decreased, while the consumption of carbohydrates in the post-workout meal increased ($R = 0.17$). There was also a positive correlation between openness to experience and eating a snack before long-duration training ($R = 0.17$). Simultaneously, along with the intensification of agreeableness, the scale of consuming vegetables and fruits in the pre-training meal (R = −0.17), drinking 500–600 mL of fluids 2–3 h before training (R = −0.14), consuming carbohydrates in the post-workout meal (R = −0.21), and the intake of an isotonic drink in the amount of 200–250 mL every 15–20 min after training (R = −0.14) decreased (Table 3).
An analysis of the correlations between the personality traits (sub-scales) of the Big Five model showed that the overall index of proper peri-exercise nutrition decreased with the intensification of three neuroticism traits, i.e., hostility/anger (R = −0.20), impulsiveness/immoderation (R = −0.18), and vulnerability to stress/fear/learned helplessness (R = −0.19) and four traits of agreeableness, i.e., straightforwardness/morality (R = −0.17), compliance/cooperation (R = −0.19), modesty (R = −0.14), and tendermindedness/sympathy (R = −0.15) ($p \leq 0.05$) (Table 4).
A multiple regression analysis (dependent variable: overall index of proper peri-exercise nutrition; predictors: personality traits of the Big Five model) indicated that the full model consisting of all analysed personality traits explained $99\%$ of the variance in the level of the index of appropriate peri-exercise nutrition, with agreeableness, extraversion, conscientiousness, and openness retained as predictors. The variable with the highest importance was agreeableness (b* = 0.437). The described relationships were directly proportional (Table 5).
## 4. Discussion
The discussed research showed limited implementation of the qualitative recommendations for peri-exercise nutrition, as well as significant correlations between some dimensions of personality and peri-exercise nutritional behaviours among elite Polish athletes practicing team sports.
When discussing peri-exercise nutrition, two findings should be highlighted: the average level of correct behaviours in this area (51.9 out of 75 points, i.e., $68.5\%$) and the varied level of implementation of the individual recommendations, with the highest (more than $70\%$ of the group) concerning fluid replenishment before and after exercise, as well as vegetables, fruits, and complex carbohydrates in the pre-workout and post-workout meals. Among the recommendations of peri-exercise nutrition, special importance should be given to supplementing water and electrolytes as well as vegetables and fruits (alkalinizing products, which are, among others, a source of antioxidants, B vitamins, magnesium, potassium, and carbohydrates), and to other carbohydrate products, in restoring homeostasis and optimising post-exercise restitution processes, that is, restoring the water–electrolyte and acid–base balance and rebuilding carbohydrate losses (accelerating muscle and liver glycogen resynthesis) [1,2,3,4,5]. Consuming appropriate amounts of vegetables and fruits (rich in antioxidant substances, including polyphenols, vitamin C, and carotenoids) contributes to the reduction of oxidative stress indices, i.e., reducing the health risks associated with its level [1,54,55,56,57]. In a situation of high exposure to oxidative stress, under conditions of vigorous physical exercise, a diet rich in dietary antioxidants (including vegetables and fruits) is an important aspect of rational nutrition for athletes [55,58,59,60].
The significance of proper fluid replenishment (and the prevention of dehydration in sports) is emphasized by numerous authors [8,9,61,62]. Research on the subject has also indicated the usefulness of isotonic drinks in the effective replenishment of water and electrolyte losses among athletes, including those training in volleyball, American football, and rowing [63,64,65]. The assessment of carbohydrate supply, as the basic energy substrate in the diet of athletes, has also been the subject of numerous studies in the field of sports dietetics. Recent meta-analytical studies have confirmed the prevalence of low energy and carbohydrate intake among team sports athletes [17]. Other trials have noted qualitative nutritional irregularities among athletes, including those practicing team sports [20,21,22,23,25].
Before discussing the relationships between personality traits and exercise-related eating behaviours, it is necessary to point out the basic personality characteristics of the athletes under study. In this regard, it was found that the athletes obtained high scores for extraversion ($M = 121.8$), openness ($M = 115.0$), agreeableness ($M = 123.2$), and conscientiousness ($M = 128.5$), while low scores were observed for neuroticism ($M = 72.1$) [46]. A low level of neuroticism among athletes has also been described in other studies on professional athletes, including those practicing team sports [41,42], also among those from Poland [43], and especially among master-class athletes [44].
The discussed research allowed us to note statistically significant correlations between the personality traits of the Big Five model and their sub-scales and the quality of peri-exercise nutrition among athletes. A negative predictive value regarding neuroticism and agreeableness was found in the overall index of proper peri-exercise nutrition. The correlations found between extraversion and exercise-related eating behaviours were not unambiguous, while within the dimension of openness, a positive relationship was described with one of the aspects of nutrition, i.e., the snack before long-duration training. Conscientiousness and its sub-scales were not related to the quality of peri-exercise nutrition among the studied athletes. The obtained results confirm the difficulties in an unambiguous assessment and interpretation of the relationship between the personality and nutritional behaviours of athletes.
The discussed study is a continuation of our earlier research carried out among Polish team sports athletes, which concerned correlations between personality traits and the health quality of the diet (associated with the frequency of consuming products with potentially beneficial and potentially adverse health effects) and the implementation of the quality recommendations from the Swiss pyramid for athletes [46,47]. The discussed results, indicating the negative predictive significance of neuroticism (and its sub-scales) for proper exercise-related nutritional behaviours, correspond to the previously reported relationship between neuroticism and a lower health quality of athletes’ diets [46]. Furthermore, among physical education students, a relationship between lower neuroticism and more rational food choices in terms of consuming sea fish was described [45]. In other studies conducted among the general population, it was shown that neuroticism, through the mechanism of emotional eating, promoted the consumption of non-recommended products, including confectionery [48]. The discussed studies on athletes, indicating the negative predictive significance of agreeableness (and its sub-scales), correspond to studies among university students in Ghana, in which a relationship was demonstrated between high agreeableness and irregular eating habits [49]. Agreeableness reduces commitment to performed activities. The described ambiguous relationships between extraversion and some particular exercise-related eating behaviours of athletes training in team sports correspond to the positive predictive significance of extraversion as an indicator of a healthy diet of athletes shown in our previous research [46], but also as an indicator of both healthy and unhealthy diets among students of physical education [45]. On the one hand, higher extraversion favoured the consumption of vegetables among athletes [46], but also of confectionery products among physical education students [45]. Relationships between conscientiousness and the quality of peri-exercise nutrition were not found, unlike among students of physical education (in whom, along with the increase in conscientiousness, the pro-health quality of the diet, expressed by the pro-healthy diet index, pHDI-14, increased) [45]. Positive relationships between conscientiousness and a healthy diet were also noted by other authors in various population groups other than athletes [48,50,51].
It can be concluded that various studies on the personality determinants of eating behaviour in different population groups sometimes provide varied and ambiguous results. Further interdisciplinary research is needed to explain the mechanisms of the observed relationships, which is also pointed out by other authors [51]. Nutritional irregularities found among athletes justify the need to monitor diet and carry out nutritional education, considering the individualisation of influences promoting a healthy way of eating, also in conditions of physical exercise and post-exercise recovery. Learning about the relationships between personality and eating behaviours may be conducive to the personalisation of interactions in the field of nutrition education and diet modification, taking the personality traits of athletes into account. By understanding how personality traits are related to nutrition, we can better identify individuals who may be at a higher risk of poor health outcomes and consequently develop targeted interventions to promote healthy eating. As we learn more about the interplay between personality and nutrition, we may be able to develop more personalised approaches to nutrition. For example, individuals who demonstrate a high level of neuroticism may benefit more from different types of dietary interventions than individuals who exhibit high extraversion. By tailoring our recommendations to an individual’s personality, we may be able to achieve better outcomes.
The limitations of this work are primarily related to the failure to include demographic and sports variables (e.g., training, competition experience, and discipline), the focus on one selected nutritional area (peri-exercise nutrition behaviours), and the self-descriptive nature of the applied research tools. The limitations of the study also concern the failure to consider the training loads that determine the nutritional needs of athletes. The limitations indicated, as well as others, may set the directions for further research, the aim and subject of which should be a comprehensive assessment of personality determinants in various areas of sports nutrition, taking gender, sports experience, sports level, and type of discipline into account. Further research could concern personality determinants regarding the quantitative aspects of athletes’ diets (e.g., energy consumption, macronutrients, vitamins, and mineral salts), which would contribute to a comprehensive assessment of athletes’ nutrition.
# The Mediating Effect of Central Obesity on the Association between Dietary Quality, Dietary Inflammation Level and Low-Grade Inflammation-Related Serum Inflammatory Markers in Adults
## Abstract
To date, few studies have explored the role of central obesity in the association between diet quality, measured by the healthy eating index (HEI) and the dietary inflammatory index (DII), and low-grade inflammation-related serum inflammatory markers. In this paper, we use data from the 2015–2018 National Health and Nutrition Examination Survey (NHANES) to explore this. Dietary intakes were measured during two 24-h dietary recall interviews and using USDA Food Patterns Equivalents Database (FPED) dietary data. Serum inflammatory markers were obtained from NHANES laboratory data. Generalized structural equation models (GSEMs) were used to explore the mediating relationship. Central obesity plays a significant mediating role in the association between HEI-2015 and high-sensitivity C-reactive protein (hs-CRP), mediating $26.87\%$ of the association between the two; it also mediates $15.24\%$ of the association between DII and hs-CRP. Central obesity likewise mediates $13.98\%$ of the association between HEI-2015 and white blood cell count (WBC) and $10.83\%$ of the association between DII and WBC. Our study suggests that central obesity plays a mediating role in the association of dietary quality with low-grade inflammation-related serum inflammatory markers (hs-CRP and WBC).
## 1. Introduction
Growing evidence shows that low levels of chronic systemic inflammation are associated with a number of chronic diseases, including cardiovascular disease, cancer, chronic kidney disease and neurodevelopmental disorders [1,2,3]. Dietary nutrition is a key variable affecting chronic inflammation, mainly because daily food intake is a good indicator of inflammatory potential [4].
Recent studies have linked different types of food to chronic inflammation. A previous study has shown that when added sugars are consumed, fat cells release pro-inflammatory cytokines that trigger inflammation [5]. Recent studies have also found an inverse association between increased vegetable and fruit intake and serum CRP levels [6,7]. The results of a cross-sectional study in India showed that a $1\%$ reduction in dietary saturated fatty acid (SFA) intake was associated with a 0.14 g/L reduction in plasma hs-CRP, after adjusting for relevant variables [8]. Although the relationship between individual foods or individual nutrients and chronic inflammation has been discussed, in recent years, there has been a growing recognition that different combinations of food components may interact in complex ways that are better explained by dietary patterns.
Dietary intake can modulate inflammation and is a promising means of reducing the risk of chronic diseases and metabolic dysfunction [9,10,11,12]. The healthy eating index (HEI) score is a measure of dietary quality that represents the degree to which the Dietary Guidelines for Americans (DGA) are followed [13,14]. The dietary inflammatory index (DII) score is a measure of dietary inflammatory potential based on the overall inflammatory characteristics of dietary components [15,16]. Several studies [9,12,16] have shown that both HEI and DII are associated with inflammatory markers.
Obesity is described as a chronic low-grade inflammatory state, and visceral fat is known to secrete a number of inflammatory markers [17,18]. The increased secretion of adipokines in people with obesity may lead to chronic low-grade inflammation and oxidative stress, which may induce the development of chronic diseases [19]. Studies [20,21,22,23,24] have shown that DII and HEI scores are associated with central obesity. However, the underlying mechanism linking these diet scores to chronic systemic inflammation remains unclear; changes in body weight under different dietary conditions may lead to changes in inflammatory markers in the body. Therefore, it is necessary to explore the pathways and intrinsic associations between dietary scores and inflammation.
Taken together, the above evidence suggests that central obesity may be a causal chain between dietary scores and chronic inflammation. However, to the best of our knowledge, no study to date has investigated whether central obesity mediates the relationship between diet score and inflammatory markers. Therefore, the aim of this study was to explore the relationship between DII, HEI and the level of inflammatory markers, and to further explore whether this relationship is mediated by obesity.
## 2.1. Data Source and Study Sample
The data for this study were obtained from the National Health and Nutrition Examination Survey (NHANES), a multi-stage, large-sample database. NHANES is a cross-sectional study conducted by the National Center for Health Statistics (NCHS) of the Centers for Disease Control and Prevention (CDC) that provides data from a nationally representative survey of the health and nutrition status of the non-institutionalized United States (U.S.) population. It follows a complex multi-stage sampling design; data collection includes face-to-face interviews at home (demographic, socioeconomic, dietary, and health-related issues), health examinations at mobile examination centers (medical and physiological measurements), and laboratory tests (biomarkers of exposure and effect). One NHANES cycle comprises data collected over two years.
Data from the NHANES 2015–2016 and 2017–2018 cycles were selected for this study, comprising a total of 19,225 participants. Of these, 7377 participants were under the age of 18 years, and 1314 lacked data for BMI and waist circumference. A further 731 participants lacked data for high-sensitivity C-reactive protein (hs-CRP), white blood cell count (WBC), and the neutrophil-to-lymphocyte ratio (NLR). In addition, 1095 participants with abnormal values that could not reflect a state of low-grade inflammation (hs-CRP ≥ 10 mg/L [25] or WBC > 11 × 10⁹ cells/L [26]) were excluded. Thus, a total of 8157 participants were enrolled in our study (Figure 1).
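The exclusion cascade can be expressed as a simple filter. Below is a minimal pandas sketch with hypothetical column names; the corresponding NHANES variables would be, e.g., RIDAGEYR, BMXBMI, BMXWAIST, LBXHSCRP, and LBXWBCSI, though those mappings should be treated as assumptions rather than a verified crosswalk.

```python
import pandas as pd

# Tiny illustrative frame; in practice this would be the merged NHANES data.
df = pd.DataFrame({
    "age": [45, 12, 30], "bmi": [28.1, None, 31.0], "waist_cm": [99.0, 60.0, 105.0],
    "hscrp_mg_l": [2.1, 0.5, 12.0], "wbc_10e9_l": [6.2, 7.0, 8.1],
})

eligible = df[
    (df["age"] >= 18)                                    # adults only
    & df[["bmi", "waist_cm"]].notna().all(axis=1)        # anthropometry present
    & df[["hscrp_mg_l", "wbc_10e9_l"]].notna().all(axis=1)
    & (df["hscrp_mg_l"] < 10)                            # exclude hs-CRP >= 10 mg/L
    & (df["wbc_10e9_l"] <= 11)                           # exclude WBC > 11 x 10^9 cells/L
]
print(len(eligible), "participants retained")
```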
## 2.2. Central Obesity
Central obesity was defined as a waist circumference ≥ 102 cm in men and ≥ 88 cm in women. Although BMI is a common indicator of general obesity, it does not reflect differences in the distribution of body fat between individuals [27].
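This sex-specific rule vectorizes directly; a small sketch with hypothetical column names:

```python
import numpy as np
import pandas as pd

# Sex-specific waist-circumference cutoffs: >= 102 cm (men), >= 88 cm (women).
d = pd.DataFrame({"sex": ["male", "female", "female"], "waist_cm": [104.0, 82.0, 91.5]})
cutoff = np.where(d["sex"] == "male", 102.0, 88.0)
d["central_obesity"] = d["waist_cm"] >= cutoff
print(d)
```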
## 2.3. Dietary Score
In this study, two different types of dietary scores were selected. We used the healthy eating index (HEI) score to represent participants’ dietary quality, and the dietary inflammatory index (DII) score to represent the inflammatory potential of the diet.
## 2.3.1. Healthy Eating Index (HEI)
We used the HEI score, designed and recommended by the United States Department of Agriculture (USDA), to measure an individual’s adherence to the Dietary Guidelines for Americans (DGA) [28]. The HEI-2015 is the latest version of the index, with a maximum score of 100. It consists of 13 components, divided into 9 adequacy components (total vegetables, greens and beans, total fruits, whole fruits, whole grains, dairy, total protein foods, seafood and plant proteins, and fatty acids) and 4 moderation components (sodium, refined grains, saturated fats, and added sugars). Higher intakes of the adequacy components yield higher scores, while lower intakes of the moderation components yield higher scores.
NHANES individual food questionnaire data and Food Patterns Equivalents Database (FPED) dietary data were used to estimate food-group intakes for the HEI-2015 score. Each food was classified according to its USDA food code. Finally, the recommended SAS code was used to calculate the HEI-2015 score [29].
## 2.3.2. Dietary Inflammatory Index (DII)
The DII score is an indicator of the inflammatory potential of the diet and can be used in any population where dietary data can be collected. The DII calculation involves 45 dietary parameters, including a variety of macro- and micronutrients, flavonoids, spices, and other bioactive compounds, each of which is assigned an inflammatory effect score. DII scores are then calculated by standardizing individual intakes against a world database containing the mean and standard deviation of each food parameter from 11 countries. In NHANES, 28 of the 45 dietary parameters were available for the DII calculation, including carbohydrate, protein, cholesterol, iron, zinc, magnesium, selenium, fiber, fat, monounsaturated fatty acids, caffeine, n-3 polyunsaturated fatty acids, n-6 polyunsaturated fatty acids, total polyunsaturated fatty acids, saturated fatty acids, alcohol, vitamin A, vitamin B1, vitamin B2, vitamin B6, vitamin B12, beta-carotene, vitamin C, vitamin D, vitamin E, folic acid, and energy. Previous studies have shown no change in the DII’s ability to predict inflammation when the available food parameters are reduced, compared with the complete set of 45 parameters [15,30]. A DII score > 0 indicates that the individual’s diet has a pro-inflammatory potential, while a DII score < 0 indicates an anti-inflammatory potential. The specific DII calculation process is shown in Figure 2.
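The general DII algorithm (standardize intake against the world database, convert the z-score to a centered percentile, and weight by the parameter’s inflammatory effect score) can be sketched as follows. The three parameter rows are illustrative stand-ins and should not be relied on as the published reference values.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters only (not verified published values):
#                   global_mean, global_sd, inflammatory effect score
PARAMS = {
    "fiber_g":         (18.8,      4.9,      -0.663),
    "saturated_fat_g": (28.6,      8.0,       0.373),
    "vitamin_c_mg":    (118.2,    43.5,      -0.424),
}

def dii_score(intake: dict) -> float:
    """Sum of centered-percentile intakes weighted by inflammatory effect scores."""
    total = 0.0
    for name, (mean, sd, effect) in PARAMS.items():
        z = (intake[name] - mean) / sd
        centered = 2.0 * norm.cdf(z) - 1.0   # percentile rescaled to [-1, 1]
        total += centered * effect
    return total

# High fiber lowers the score (anti-inflammatory); high saturated fat raises it.
print(f"DII = {dii_score({'fiber_g': 30, 'saturated_fat_g': 35, 'vitamin_c_mg': 60}):.2f}")
```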
## 2.4. Serum Inflammatory Marker
Serum inflammatory markers were obtained from NHANES laboratory data. Three different inflammatory markers were selected, including hs-CRP, WBC, and NLR.
CRP is a sensitive marker of systemic inflammation, tissue damage, and infection in clinical practice [31]. It is a sensitive but non-specific inflammatory indicator. Compared with standard CRP assays, hs-CRP can reflect the current level of cardiovascular disease risk in individuals without overt inflammatory conditions. In NHANES, CRP was quantified by latex-enhanced turbidimetry. Because laboratories, instruments, and methods varied between the two periods we explored, a weighted Deming regression provided by NHANES was used to harmonize the two [32]. The forward equation (applicable to DxC 660i values ≤ 23 mg/L) is as follows:

$$Y\ (\text{Cobas 6000}) = 0.8695\ (95\%\ \text{CI: } 0.8419\ \text{to}\ 0.8971) \times X\ (\text{DxC 660i}) + 0.2954\ (95\%\ \text{CI: } 0.2786\ \text{to}\ 0.3121)$$

NHANES performed complete blood cell counts (CBC) in duplicate for all study participants over one year of age. Blood samples were obtained by venipuncture into EDTA tubes and analyzed on a Beckman Coulter UniCel DxH 800 analyzer, from which counts of WBC and their subtypes were obtained.
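Applying the forward crosswalk equation is straightforward; a small sketch (the range guard reflecting the stated validity limit is our addition):

```python
def harmonize_hscrp_dxc_to_cobas(dxc_value_mg_l: float) -> float:
    """Forward crosswalk from the text: valid for DxC 660i values <= 23 mg/L."""
    if dxc_value_mg_l > 23:
        raise ValueError("forward equation only applies to values <= 23 mg/L")
    return 0.8695 * dxc_value_mg_l + 0.2954

print(harmonize_hscrp_dxc_to_cobas(5.0))  # 4.6429 mg/L on the Cobas 6000 scale
```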
## 2.5. Sensitivity Analysis
To make our results more robust, we performed a sensitivity analysis. BMI is the simplest and most widely used anthropometric measure of general obesity [BMI = weight (kg)/height (m)²]. According to World Health Organization standards, general obesity is defined as BMI ≥ 30 kg/m². Individuals with general obesity were included in the study as a sensitivity analysis.
## 2.6. Covariates
Recent epidemiological studies have shown that dietary over-nutrition and institutionally driven declines in physical activity may be important factors influencing obesity [33,34]. Obesity is associated with an increased risk of diabetes, according to a meta-analysis involving 18 prospective studies [35]. Previous prospective studies have found that the development of high blood pressure is proportional to the level of obesity [36]. A cross-sectional study of 499,504 adults found that smoking was associated with an increased risk of obesity [37]. In addition, since dietary patterns and the state of obesity may vary by race and generation, we further adjusted for relevant demographic variables.
Trained NHANES investigators obtained demographic information from participants living in sample areas. To control for the effect of potential confounders, the following covariates were included: age (18–39, 40–59, >60), sex (men, women), race/ethnicity (Mexican American, Other Hispanic, non-Hispanic White, non-Hispanic Black, and Other Race), education of the household referent (less than high school, high school, more than high school), ratio of family income to poverty, marital status (married/living with partner, widowed/divorced/separated/never married), work activity (vigorous activity, moderate activity, and other), recreational activity (vigorous activity, moderate activity, and other), smoking (never smoker; former smoker: lifetime intake of more than 100 cigarettes but current serum cotinine below the threshold; current smoker: lifetime intake of more than 100 cigarettes and current serum cotinine reaching the threshold), diabetes (self-reported physician diagnosis), and hypertension (yes: systolic blood pressure ≥ 130 or diastolic blood pressure ≥ 80, or no). The thresholds for serum cotinine, used to distinguish former from current smokers, were > 4.85 ng/mL for non-Hispanic White participants, > 5.92 ng/mL for non-Hispanic Black participants, > 0.84 ng/mL for Mexican American participants, and > 3.08 ng/mL for others [38].
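The smoking-status rule with its race-specific cotinine thresholds can be written as a small helper; a sketch assuming the category labels above:

```python
COTININE_THRESHOLD_NG_ML = {
    "non-Hispanic White": 4.85,
    "non-Hispanic Black": 5.92,
    "Mexican American": 0.84,
    "Other": 3.08,
}

def smoking_status(lifetime_cigs: int, serum_cotinine: float, race: str) -> str:
    """Never / former / current smoker per the rule described in the text."""
    if lifetime_cigs < 100:
        return "never smoker"
    threshold = COTININE_THRESHOLD_NG_ML.get(race, COTININE_THRESHOLD_NG_ML["Other"])
    return "current smoker" if serum_cotinine > threshold else "former smoker"

print(smoking_status(200, 1.2, "non-Hispanic White"))  # former smoker
```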
## 2.7. Statistical Analysis
The analysis was performed using Stata version 12.0 (Stata Corporation, College Station, TX, USA) and SAS version 9.4 (SAS Institute, Inc., Cary, NC, USA). Due to the complex sampling design, all analyses were adjusted for survey design and weight variables. Since this study combined NHANES data from two cycles, a new sample weight (the original 2-year sample weight divided by 2) was constructed according to the NHANES analysis guidelines before analysis. Categorical variables were described by percentages, and the basic characteristics of continuous variables were described by means and standard deviations. Student’s t-test and the rank-sum test were used to analyze differences in continuous data, and the chi-square test was used to analyze differences in categorical data. The normality of each clinical biomarker was assessed by visual inspection of the normal probability plot and by assessment of skewness and kurtosis. Variables that were not normally distributed were natural-log-transformed. Residuals of the predicted values were plotted and assessed for normality. To better fit the model, both hs-CRP and NLR were natural-log-transformed.
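A minimal sketch of these two preprocessing steps, combining cycles by halving the 2-year weight and natural-log-transforming the skewed markers (column names are hypothetical; the real NHANES exam weight variable is, e.g., WTMEC2YR):

```python
import numpy as np
import pandas as pd

# Combining two 2-year NHANES cycles: divide the 2-year weight by 2.
df = pd.DataFrame({"wtmec2yr": [35000.0, 12000.0], "hscrp": [1.8, 4.2], "nlr": [1.9, 2.4]})
df["wtmec4yr"] = df["wtmec2yr"] / 2.0

# Natural-log transform of the right-skewed markers, as described above.
df[["ln_hscrp", "ln_nlr"]] = np.log(df[["hscrp", "nlr"]].to_numpy())
print(df)
```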
All statistical analyses were based on the survey design and weighted variables, adjusted to account for the complex sample design and to ensure nationally representative estimates. Multiple linear regression and multiple logistic regression analyses were used to explore the association between diet scores and obesity, according to the type of study data. Weighted generalized structural equation models (GSEMs) were used to explore the mediating effect of central obesity on the relationship between diet score and low-grade inflammation. We performed a sensitivity analysis on people with general obesity. The mediation model was constructed and analyzed using a causal diagram (Figure 3). All p values reported were two-sided; $p \leq 0.05$ was considered statistically significant.
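The authors fit weighted GSEMs in Stata. As a simplified, unweighted illustration of the underlying product-of-coefficients logic, the sketch below treats the binary mediator with a linear-probability model (an assumption made to sidestep the scale issues a logit mediator model introduces) and bootstraps the indirect effect; it is not the authors’ implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
diet = rng.standard_normal(n)                                       # e.g. centered DII
obesity = (0.3 * diet + rng.standard_normal(n) > 0).astype(float)   # binary mediator
ln_crp = 0.05 * diet + 0.6 * obesity + rng.standard_normal(n)       # outcome

def indirect_effect(diet, obesity, ln_crp):
    X = sm.add_constant(diet)
    a = sm.OLS(obesity, X).fit().params[1]            # diet -> mediator (linear prob.)
    Xo = sm.add_constant(np.column_stack([diet, obesity]))
    fit = sm.OLS(ln_crp, Xo).fit()
    c_prime, b = fit.params[1], fit.params[2]         # direct effect, mediator effect
    return a * b, a * b / (a * b + c_prime)           # indirect effect, prop. mediated

ab, prop = indirect_effect(diet, obesity, ln_crp)

# Percentile bootstrap CI for the indirect effect.
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(diet[idx], obesity[idx], ln_crp[idx])[0])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {ab:.3f} (95% CI {lo:.3f}, {hi:.3f}); proportion mediated = {prop:.1%}")
```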
## 3. Results
Table 1 shows the baseline characteristics of participants in terms of general and central obesity. A total of 8157 participants were included in our study. The proportion with central obesity was $56.1\%$. Central obesity was found in $40.4\%$ of men and $59.6\%$ of women; women were more likely to be centrally obese than men. There were statistically significant differences between people with and without central obesity in gender, age, race, education level, PIR, smoking, diabetes, work activity, recreational activity, and hypertension.
Table 2 shows the results for the relationships of dietary score, general obesity, and central obesity with inflammatory markers, analyzed with a multiple linear regression model. In all adjusted models, an increased DII score was associated with increased levels of hs-CRP and WBC (βhs-CRP = 0.046, $95\%$CI: 0.025, 0.068; βWBC = 0.058, $95\%$CI: 0.012, 0.103). An increased HEI score was associated with reduced levels of hs-CRP and WBC (βhs-CRP = −0.006, $95\%$CI: −0.009, −0.004; βWBC = −0.010, $95\%$CI: −0.015, −0.005). General obesity was associated with increased levels of hs-CRP and WBC (βhs-CRP = 0.650, $95\%$CI: 0.591, 0.709; βWBC = 0.502, $95\%$CI: 0.377, 0.627), as was central obesity (βhs-CRP = 0.661, $95\%$CI: 0.592, 0.730; βWBC = 0.582, $95\%$CI: 0.440, 0.724). The associations of NLR with both types of obesity and with the dietary scores were not statistically significant ($p > 0.05$).
Table 3 shows the correlations between the two dietary scores and the two types of obesity, analyzed with multiple logistic regression models. In the fully adjusted models, an increased HEI score was inversely associated with the risk of both types of obesity (ORgeneral obesity = 0.980, $95\%$CI: 0.975, 0.985; ORcentral obesity = 0.987, $95\%$ CI: 0.980, 0.994), while increased DII scores were associated with an increased risk of both types of obesity (ORgeneral obesity = 1.066, $95\%$CI: 1.008, 1.127; ORcentral obesity = 1.055, $95\%$ CI: 1.002, 1.110). The association between changes in NLR and the two types of obesity was not statistically significant ($p > 0.05$).
Table 4 shows the mediating-effect analysis of central obesity on the relationship between dietary pattern and serum inflammatory markers. The mediating-effect regression coefficients of central obesity on the relationships of the DII and HEI scores with hs-CRP were statistically significant (βDII = 0.007, $95\%$CI: 0.001, 0.014; βHEI = −0.002, $95\%$CI: −0.003, −0.001), accounting for $15.24\%$ and $26.87\%$ of the total effect, respectively. The mediating effect of central obesity on the relationships of the DII and HEI scores with WBC was also statistically significant (βDII = 0.006, $95\%$CI: 0.000009, 0.012; βHEI = −0.001, $95\%$CI: −0.002, −0.0005), accounting for $10.83\%$ and $13.98\%$ of the total effect, respectively.
Table 5 shows the mediating-effect analysis of general obesity on the relationship between dietary pattern and serum inflammatory markers, analyzed using generalized structural equations. This sensitivity analysis shows that the mediating effect remained significant ($p \leq 0.05$).
## 4. Discussion
In this study, data from two periods of NHANES, 2015–2016 and 2017–2018, were used to analyze the mediating effect of central obesity on the relationship between dietary scores and low-grade inflammation-related serum inflammatory markers. We found that both DII and HEI-2015 dietary scores were associated with central obesity in US adults. In addition, central obesity partially mediated the association between dietary score and low-grade inflammation-related serum inflammatory markers. In the association of DII with hs-CRP and WBC, central obesity mediated $15.24\%$ and $10.83\%$, respectively. Among the associations of HEI-2015 with hs-CRP and WBC, central obesity mediated $26.87\%$ and $13.98\%$, respectively.
The effects of dietary quality and the dietary inflammatory index may be mediated partly by central obesity. We found that the HEI-2015 score was negatively correlated with hs-CRP and WBC, and the DII score was positively correlated with these two serum inflammatory markers. Both hs-CRP and WBC were positively associated with general and central obesity. A prospective study found that dietary patterns are associated with pro-inflammatory and anti-inflammatory characteristics of the gut microbiota [39]. A British twin cohort showed that dietary quality was associated with methylation of 24 CpG sites, several of which were associated with adiposity, inflammation, and glucose abnormalities [40]. Alterations in HEI are associated with altered expression of genes that are markers of inflammation [41,42], and the effects of diet in regulating inflammation are thought to be due to complex interactions between food and biologically active nutrients [12]. A cross-sectional study of 20,823 adults in the Moli-sani cohort constructed a composite INFLA score (CRP, white blood cell count, and NLR) that was positively associated with the DII score [43].
Therefore, a positive and effective diet for individuals with obesity may help to better control the state of inflammation and, consequently, avoid the occurrence and development of other complications. Dietary recommendations from the DGA can help people reduce their risk of obesity and low-grade inflammation. Several studies [44,45] have shown that a diet high in vegetables and fruits is inversely associated with inflammatory markers, while a diet high in meat, refined carbohydrates, added sugars, and saturated and trans fatty acids, and low in vegetables and omega-3 fatty acids, tends to be positively associated with inflammatory markers. The most obvious change from the 2015–2020 version of the DGA is the explicit limit on added sugars. Intake of added sugars, such as sucrose and high-fructose corn syrup, has increased over the past hundred years and is strongly associated with increases in obesity, metabolic syndrome, and diabetes [46].
We found that HEI-2015 was significantly negatively associated with obesity, DII was significantly positively associated with obesity, and both types of obesity were significantly correlated with hs-CRP and WBC levels. Obesity is a chronic, low-grade inflammatory state, and there may be several reasons why obesity leads to increased levels of inflammatory markers. Fat cells enhance insulin resistance and metabolic disorders, thereby promoting inflammation by increasing levels of CRP and other inflammatory markers [47]. Macrophages are reported to be a source of adipose tissue-derived proteins [48]. In individuals with abdominal obesity, an increase in the number of macrophages infiltrating visceral adipose tissue suggests that adipose tissue itself is a source and site of inflammation [49].
Based on this evidence, dietary patterns and diet quality may influence low-grade inflammation through obesity. Therefore, dietary interventions that improve the dietary quality of individuals with obesity may effectively reduce the risk of inflammation and prevent its complications. Of course, central obesity may not be the only mediator between diet quality and low-grade inflammation. Other factors, such as hypertension and diabetes, have also been strongly linked to diet quality and inflammation, and these need to be further verified in subsequent studies.
Our study has several strengths. First, the sample size was large and, owing to the wide coverage of NHANES, highly representative. Second, we weighted the data throughout the study, which helps to extrapolate our results to the entire U.S. population. Third, we conducted correlation analyses before the mediation analysis to improve the credibility of the results.
At the same time, the limitations of the research should not be ignored. First, this is a cross-sectional study that cannot establish causal relationships, and further prospective studies are needed. Second, missing data in the study population may lead to selection bias and affect the results of the whole study. Third, we were limited to the variables available in NHANES and thus could not further adjust for factors such as genetics or the microbiome, which may also have an impact. Finally, we used dietary recall data and a limited set of food parameters to construct the dietary scores, particularly the DII, which may have influenced our results.
## 5. Conclusions
In conclusion, the results of this study suggest that better dietary quality can influence the state of central obesity, which in turn can reduce the level of inflammation. It should be noted that dietary quality and dietary inflammatory potential may have important implications for the prevention of inflammation, which should be further explored in prospective studies.
# The Effect of a 12-Week Physical Functional Training-Based Physical Education Intervention on Students’ Physical Fitness—A Quasi-Experimental Study
## Abstract
Children have received much attention in recent years, as many studies have shown that their physical fitness levels are declining. Physical education, as a compulsory curriculum, can play a major role in promoting students’ participation in physical activities and enhancing their physical fitness. The aim of this study was to examine the effects of a 12-week physical functional training intervention program on students’ physical fitness. A total of 180 primary school students (7–12 years) were invited to participate in this study, 90 of whom participated in physical education classes that included 10 min of physical functional training, while the remaining 90 formed a control group that participated in traditional physical education classes. After 12 weeks, the 50-m sprint ($F = 18.05$, $p \leq 0.001$, $\eta_p^2 = 0.09$), timed rope skipping ($F = 27.87$, $p \leq 0.001$, $\eta_p^2 = 0.14$), agility T-test ($F = 26.01$, $p \leq 0.001$, $\eta_p^2 = 0.13$), and standing long jump ($F = 16.43$, $p \leq 0.001$, $\eta_p^2 = 0.08$) all improved, but the sit-and-reach did not ($F = 0.70$, $p = 0.405$). The results showed that physical education incorporating physical functional training can effectively promote some parameters of students’ physical fitness, while also providing a new and alternative approach to improving students’ physical fitness in physical education.
## 1. Introduction
The essential foundation for a person to achieve good health is established during childhood, and this groundwork subsequently determines health in adulthood [1,2]. Physical fitness (PF) is a powerful marker of health in children [3], and this indicator appears to be growing in significance in their everyday lives [4]. Not only has PF been reported to be essential for performing school activities and meeting home responsibilities, but it has also been proclaimed to provide adequate energy for sports and alternative leisure activities [5]. There is evidence that low PF levels in children are associated with negative health outcomes, such as obesity, heart disease, impaired skeletal health, and poor quality of life [6].
Physical education (PE) is regarded as an ideal intervention point for promoting students’ health and PF because it involves almost all children [7]. Dobbins et al. [8] noted that PE-based interventions could ensure that $100\%$ of students were exposed to the intervention, which could benefit a large number of children across a wide range of demographic groups. Additionally, Errisuriz et al. [9] suggested that even minor PE modifications could improve fitness, and that the key was to discover a PE-based intervention that could be executed successfully. However, some studies have highlighted barriers that prevent PE from playing a vital role in promoting students’ physical health and fitness, such as the scope, quantity, and quality of PE classes [10,11,12,13,14]. Ji and Li [15] also pointed out that PE in China had become a “safety class”, “discipline class”, and “military class” that overemphasized the uniformity of movements and in which students often did not even sweat throughout the duration of the class. Such PE would not benefit public health and could make students’ physiques even worse [16]. In response to the shortcomings of traditional PE classes in China, a physical education and health curriculum model was proposed in 2015, which emphasized that each PE class must include 10 min of fitness training using diversified, enjoyable, and compensatory methods and means [17]. This model was mainly aimed at traditional PE classes that did not allocate specific time for PF exercises [16]. Many types of training methods have been suggested to improve PF, such as school-based high-intensity interval training [18], integrated neuromuscular exercise [19], game-based training [20], and sports training [21]. Moreover, functional training (FT) has also been advocated as a method to improve PF [22,23,24,25]. FT is a training concept and method system that focuses on basic posture and movement patterns, integrates various qualities to optimize the most basic movement abilities of the human body, and systematically optimizes links such as movement pattern, spinal strength, the kinetic chain, and recovery and regeneration, to improve athletic ability [26].
FT is a relatively novel form of fitness [27], which originated in sports medicine, was then used in sports coaching, and was finally adopted in gymnasiums [28]. Nowadays, FT has become a fitness hot topic, ranking among the top 20 worldwide fitness trends in the American College of Sports Medicine (ACSM) global fitness trend survey since 2007 [29,30,31,32,33,34,35,36,37,38,39,40,41,42,43]. One reason for its popularity is its health benefits; FT was designed to enhance the ability of exercisers to meet the demands of performing a wide range of activities of daily living at home, work, or play without undue risk of injury or fatigue [44]. Another reason is related to performance benefits, as Boyle [45] noted that FT could help train speed, strength, and power for improved performance. Furthermore, FT requires little space, little equipment, and little time, adding to its popularity [46]. In 2011, China introduced FT when preparing for the London Olympics [47]. To highlight the importance of FT in sports and distinguish it from the FT of medical institutions, the word “physical” was added before “functional training”, and physical functional training (PFT) became a widely used term to replace FT in China [48]. PFT includes pillar preparation, movement preparation, plyometrics, movement skills, strength and power, energy system development, and regeneration and recovery [26]. PFT has the characteristic of “separation and combination” in its application, so each PFT section can be designed and arranged flexibly, based on different stages of training and tasks, as needed [49].
With the deepening research on PFT in sports [25,50], more researchers began to transplant PFT to school PE. Through a systematic review of the research on PFT from 2009 to 2019, Kang et al. [51] pointed out that researchers focused on PFT theoretical research from 2009 to 2012 and on applied research integrating PFT with PE from 2012 to 2014; after 2014, as PFT research topics grew richer and deeper, researchers focused on the application of PFT in school PE to improve students’ PF. However, these studies mainly involved teenagers and college students, with less attention paid to children [24,51]. Therefore, this research aimed to integrate PFT, an innovative PF training method, into PE and evaluate the impact of a 12-week PFT-based PE intervention on primary school students’ PF. The PFT intervention was designed to take up only 10 min of a regular PE lesson. It was hypothesized that the PF of the participants who underwent the PFT intervention would improve after 12 weeks. Additionally, it was also hypothesized that the PF performance of the participants in the PFT group would be better than that of the control group at the end of the 12-week program.
## 2.1. Study Design
This study used a 12-week quasi-experimental design in which groups of participants were assigned to an intervention or control condition in a primary school in China. The intervention group participated in a 10-min PFT intervention program which was included in the PE class. The control group remained in the traditional PE class without the PFT intervention.
## 2.2. Participants
According to the PE and Health Curriculum Standards for Compulsory Education (2011 Edition) [52], the learning levels of primary school students were divided into three levels based on the characteristics of students’ psychosomatic development, which were first and second grades as level one, third and fourth grades as level two, and fifth and sixth grades as level three. Consequently, in this study, students from second grade, third grade, and sixth grade were selected to represent students from all three levels. Two classes from the selected grades were randomly chosen as the experimental class (EC) and control class (CC), respectively, with 30 students in each class. A total of 180 male and female students between the ages of 7 and 12 (8.97 ± 1.84 years) participated in the study.
All students read the participant information form, and their parents or guardian signed the informed consent form. This study was conducted according to the procedures approved by the University of Malaya Research Ethics Committee (UM.TNC2/UMREC—667, 19 November 2019).
## 2.3. Measurements
To evaluate the impact of PFT on students of different grades and levels, this study selected the PF indicators that are mandatory for all students in the 2014 revised Chinese National Student Physical Fitness Standard (CNSPFS) battery [53], as follows: height and weight, 50-m sprint, sit-and-reach, and timed rope skipping. In addition, two further indicators, the agility T-test and the standing long jump, were selected to evaluate agility and power, according to the PF test guidelines [54]. All measurements were taken before and after the 12-week intervention, in the same order.
## 2.3.1. Height and Weight Test
Participants’ height and weight were measured using a portable instrument (GMCS-IV; Jianmin, Beijing, China) to characterize their anthropometrics. Testing was performed with the subject standing barefoot on the bottom plate of the equipment, with the head upright, the torso naturally straight, the upper limbs hanging naturally at the sides, the heels close together, and the toes 60 degrees apart; two to three seconds later, the measurement result appeared on the LCD [55]. Height was measured in meters (m) and weight in kilograms (kg).
## 2.3.2. 50-m Sprint Test
A 50-m straight racetrack, a starting flag, a whistle, and a stopwatch were used in this test, which was employed to assess speed. Before the test, the participants stood in a ready position, with one foot in front of the other and the front foot behind the starting line. Once the participants were prepared, the starter gave the instruction “set”, then blew the whistle and waved the starting flag. The participants ran to the finish line as fast as possible; timing began at the starting signal, and the timekeeper stopped the watch as the participant crossed the finish line. Each participant was allowed two trials. The best time was recorded in seconds (s) to two decimal places.
## 2.3.3. Sit-and-Reach Test
The sit-and-reach test was carried out with a seat-forward flexion tester (GMCS-IV; Jianmin, Beijing, China) to assess flexibility. During the test, the participant sat on a flat surface with the legs straight and flat against the longitudinal test plate, approximately 10~15 cm apart. The upper body was bent forward, with the palms down and hands side by side, reaching forward along the measuring line as far as possible. Participants took the test twice, and the best result was recorded in centimeters (cm) to one decimal place.
## 2.3.4. Timed Rope Skipping Test
The rope-skipping test was conducted by using a rope and a stopwatch to assess strength, muscle endurance, and coordination. During the test, participants were required to skip continuously for one minute with their feet together. The tester timed, counted, and recorded the number of times the rope was skipped.
## 2.3.5. Agility T-test
A stopwatch, a measuring tape, and four cones were used in this test to assess agility. Figure 1 shows the layout of the agility T-test. The participant began at cone 1, the same starting position for each trial. On the “go” command, the participant ran and touched cone 2, then cone 3. After touching cone 3, the participant shuffled sideways and touched cone 4. Next, the participant shuffled back, touched cone 2, and then ran back to the end line. Timing started on the command and stopped as the participant passed the end line. Each participant had two trials, and the best time was recorded in seconds (s).
## 2.3.6. Standing Long Jump Test
The test was conducted using a tape measure to assess power. During the test, the participant stood behind a line drawn on the ground with feet slightly apart. A two-foot takeoff and landing were used, with forward force provided by swinging the arms and bending the knees. The participant attempted to jump as far as possible, landing on both feet without falling backward. The test outcome was measured from the start line to the closest point of contact (the back of the heel) after landing. Two jumps were allowed, and the best was recorded in cm.
## 2.4. Intervention Program
The program included three stages, starting with two weeks of the basic stage, which was mainly used to learn the basic movement patterns, followed by five weeks of advanced stage Ⅰ and another five weeks of advanced stage Ⅱ.
The basic stage focused on teaching the basic movement patterns in order to develop PF on the foundation of mastering them. Advanced stage Ⅰ comprised PFT modules using the medicine ball, agility ladder, pad, or cone to develop the participants’ PF. Advanced stage Ⅱ was based on the same PFT modules as stage Ⅰ but with an increased training load. In terms of training load, the exercises generally involved overcoming body weight or light external loads. The increase in load from advanced stage Ⅰ to advanced stage Ⅱ was realized in the following ways: (1) changing the training route from unidirectional to multidirectional, and (2) increasing the distance and the number of repetitions. The exercise components and a detailed arrangement of the intervention are presented in Table 1.
## 2.5. Procedures
First, a team of research assistants comprising three primary school teachers from the experimental school was trained in data collection and intervention implementation.
Then, the teachers organized the participants to complete the height and weight measurements, followed by the 50-m sprint, sit-and-reach, timed rope skipping, agility T-test, and standing long jump tests for the baseline assessment. Before the tests, the PE teachers first explained safety considerations to the participants. After this introduction, they spent 10 min leading the students through a warm-up of jogging and muscle stretching before the baseline tests. In the testing process, each student had two opportunities for each test, and the best score was recorded.
Next, participants were required to attend three PE sessions per week for 12 weeks. The EC took part in PE classes that incorporated a 10-min PFT program, while the CC participated in traditional PE classes that had no mandatory requirements for PF training [16] and consisted mostly of game activities (see Table 1 for examples).
Finally, all participants were tested again by using the same format as the baseline.
## 2.6. Statistical Analysis
SPSS 25.0 software (IBM SPSS Statistics for Windows, Version 25.0. IBM Corp.: Armonk, NY, USA) was used to process and analyze the children’s PF test results. The normality of the data was checked using the Shapiro–Wilk test for all measurements. Based on the distribution results, the independent-samples t-test (parametric) or the Mann–Whitney U test (nonparametric) was used to compare the test scores between the EC and CC prior to the start of the experiment. The paired-samples t-test (parametric) or Wilcoxon signed-rank test (nonparametric) was used to compare the score changes between baseline and posttest for the EC and CC, respectively. Cohen’s d was used to describe effect sizes for the parametric tests according to the following conventions: small (0.20 to 0.49), medium (0.50 to 0.79), and large (0.80 and above) (Cohen, 1988). Pearson’s r was used to describe effect sizes for the nonparametric tests according to the following conventions: small (0.10 to 0.29), medium (0.30 to 0.49), and large (0.50 and over) [56,57].
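To make this test-selection logic concrete, the following Python sketch (for illustration only; the study itself used SPSS 25.0) applies the Shapiro–Wilk check, then chooses the parametric or nonparametric test and computes the corresponding effect size. The group arrays `ec` and `cc` are hypothetical placeholders, not study data.

```python
# Illustrative sketch of the test-selection and effect-size logic described
# above; the study used SPSS 25.0. Group arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

def compare_groups(ec, cc, alpha=0.05):
    """Choose a parametric or nonparametric two-sample test via Shapiro-Wilk."""
    normal = stats.shapiro(ec)[1] > alpha and stats.shapiro(cc)[1] > alpha
    n1, n2 = len(ec), len(cc)
    if normal:
        t_stat, p = stats.ttest_ind(ec, cc)
        # Cohen's d with a pooled standard deviation
        pooled_sd = np.sqrt(((n1 - 1) * np.var(ec, ddof=1) +
                             (n2 - 1) * np.var(cc, ddof=1)) / (n1 + n2 - 2))
        return "t-test", p, (np.mean(ec) - np.mean(cc)) / pooled_sd
    u_stat, p = stats.mannwhitneyu(ec, cc, alternative="two-sided")
    # Pearson's r from the normal approximation of the U statistic
    mu = n1 * n2 / 2
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return "Mann-Whitney U", p, abs((u_stat - mu) / sigma) / np.sqrt(n1 + n2)
```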
Analysis of covariance (ANCOVA) was conducted to determine significant differences between the posttest scores of the EC and CC. Height, weight, and the baseline score of each measurement variable were entered as covariates. Quade’s rank-transformed analysis of covariance (nonparametric ANCOVA) was used as an alternative method when the data did not meet the assumptions for ANCOVA [58,59]. Effect sizes for statistically significant outcomes were reported as partial eta squared ($\eta_p^2$), with small, medium, and large effect sizes classed as 0.01, 0.06, and 0.14, respectively [56].
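For the covariate-adjusted comparison, a minimal Python sketch of the ANCOVA and Quade's rank-transformed alternative is shown below, assuming a DataFrame with hypothetical columns `posttest`, `group`, `height`, `weight`, and `baseline` (the study itself ran these analyses in SPSS).

```python
# Hedged sketch of the covariate-adjusted comparison; `df` and its column
# names are hypothetical stand-ins for the SPSS analysis.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ancova(df: pd.DataFrame):
    """ANCOVA: posttest score by group, adjusted for height, weight, baseline."""
    fit = smf.ols("posttest ~ C(group) + height + weight + baseline", data=df).fit()
    return sm.stats.anova_lm(fit, typ=2)

def quade_ancova(df: pd.DataFrame):
    """Quade's rank ANCOVA: rank the outcome and covariates, regress out the
    ranked covariates, then test the group effect on the residuals."""
    ranked = df[["posttest", "height", "weight", "baseline"]].rank()
    resid = smf.ols("posttest ~ height + weight + baseline", data=ranked).fit().resid
    tmp = pd.DataFrame({"resid": resid, "group": df["group"].values})
    return sm.stats.anova_lm(smf.ols("resid ~ C(group)", data=tmp).fit(), typ=2)
```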
## 3.1. Comparison of Baseline Characteristics
An overview of the anthropometric characteristics of participants is shown in Table 2. There were no significant differences between the EC and CC at baseline for any measure in any grade ($p > 0.05$) (Table 3).
## 3.2. Effect of Intervention
After 12 weeks of PE classes, within-group comparisons between baseline and posttest were made for participants in both the experimental and control classes at each grade level (see Table 4). For the second grade, the EC showed significant improvement in the 50-m sprint, timed rope skipping, agility T-test, and standing long jump after the experiment ($p < 0.001$), whereas the change in the sit-and-reach ($p = 0.187$) was not significant. The CC showed significant improvements in the 50-m sprint, timed rope skipping, and standing long jump after the experiment ($p < 0.001$), whereas the changes in the sit-and-reach ($p = 0.073$) and agility T-test ($p = 0.670$) were not significant. In the third grade, there was a significant increase in all indicators in both the EC and CC ($p < 0.001$). In the sixth grade, there was a significant increase in the posttest values compared to baseline for all indicators in the EC ($p < 0.001$), whereas in the CC, there was a nonsignificant increase in the timed rope skipping ($p = 0.483$) and standing long jump ($p = 0.171$) and a significant increase in the other indicators ($p < 0.05$). Although the results varied by grade level, overall, participants in both the EC and CC made significant improvements in PF scores after 12 weeks of PE classes ($p < 0.05$).
The results of the comparison between the experimental and control classes are shown in Table 5. Overall, the differences in the postintervention indicators between the students in the EC and CC were highly significant, except for the sit-and-reach ($p = 0.405$). The specific results for each grade were as follows. In the second grade, the EC was significantly better than the CC in the 50-m sprint, timed rope skipping, and agility T-test, but the differences in the sit-and-reach ($p = 0.680$) and standing long jump ($p = 0.079$) were not statistically significant. In the third grade, the 50-m sprint, timed rope skipping, and agility T-test scores of the EC were significantly better than those of the CC, whereas the differences in the sit-and-reach ($p = 0.120$) and standing long jump ($p = 0.244$) between the two groups were not statistically significant. In the sixth grade, the 50-m sprint, timed rope skipping, and standing long jump scores of the EC were significantly better than those of the CC, whereas the differences in the sit-and-reach ($p = 0.980$) and agility T-test ($p = 0.222$) between the two groups were not statistically significant.
## 4. Discussion
The purpose of this study was to evaluate the impact of a 12-week PFT-based PE intervention on primary school students’ PF. It was hypothesized that the PF of the participants who underwent the PFT intervention would be improved after 12 weeks. In addition, it was hypothesized that the PF of the participants of the PFT group would be better than the participants of the control group at the end of the 12-week intervention.
When the baseline scores were compared with the posttest scores, the results revealed that the PF of the EC students who participated in the PFT intervention had improved after 12 weeks, in line with our hypothesis. At the same time, students in the CC had also improved significantly across time. It appeared that the traditional PE class, which consisted mostly of game activities, was able to improve students’ PF after 12 weeks, regardless of whether a 10-min PFT component was included in the class. This is a positive finding for PE in schools: the current classes were already somewhat beneficial to the students. This finding was supported by Cocca et al. [60], Cocca et al. [61], and Petrušič et al. [62], who also found that PE classes including games could improve the PF of students.
When baseline data were entered as covariates, the results of this study showed that there was a significant difference in the scores of the EC over the CC in all PF variables except for the sit-and-reach test, which also supported our hypothesis that participants of the EC would display better PF performance compared to the CC. The results of the study suggested that PFT could provide a novel exercise method for PE modules to improve students’ PF.
In this study, the largest differences between the groups were in the 50-m sprint, which evaluated speed, and the timed rope skipping, which assessed muscle strength and coordination. The EC at each level was significantly better than the CC. This was consistent with previous studies showing that PFT could improve muscle strength and speed. Yildiz, Pinar, and Gelen [24] implemented an eight-week FT versus traditional training program in preteen tennis players (9.6 ± 0.7 years) and reported that FT was more effective than traditional training for both strength and speed. Tomljanović, Spasić, Gabrilo, Uljević, and Foretić [27] similarly showed that a five-week functional training program for males aged 23 to 25 could improve speed and strength performance. Limited literature is available on the effect of PFT on coordination. Nevertheless, Li et al. [63] pointed out that PFT emphasizes the integration of neuromuscular function and strengthens efficient neural control over the muscles across multiple dimensions, ranges, and speeds, which benefits speed, agility, and coordination.
Next, the agility T-test showed significant differences between the EC and CC at levels one and two. At the third level, although both the EC and CC improved over time, there was no significant difference between the two classes. The positive changes in measured agility might be related to enhanced lower-extremity reflexes and proprioception and improved postural control developed through 12 weeks of training [27]. Meanwhile, the nonsignificant result at level three might be related to the students’ 50-m × 8 shuttle run practice, which also promoted the development of the CC students’ agility in the corresponding teaching and practice.
In addition, the standing long jump test, which evaluated power, showed no significant differences between the groups at levels one and two but revealed a significant difference at level three. This may be related to the motor coordination ability that underlies explosive power [27]. Lower-level students are not as proficient as higher-level students in postural control and muscle coordination during movement practice. The stimulation generated during movement practice might not have been sufficient to drive the neuromuscular system to produce explosive intensity [64]. Therefore, students at the lower levels may not benefit fully from PFT until they reach a later age, at which point motor coordination is better developed.
Finally, there was no significant difference between EC and CC in all three levels of the sit-and-reach test for assessing flexibility. According to previous research [23,24,65], PFT interventions could significantly improve the flexibility of participants. It was possible that the inconsistency of the results with other studies could be because the PFT program of this study did not include dynamic stretching and static stretching exercises, which were often arranged in warm-up and cool-down modules of training programs [26,66]. Because this study mainly focused on the main model of the PE class, the stretching module was not included in the PFT program.
In summary, the highlight of this study was that components of primary school students’ PF, such as speed, coordination, strength, and agility, were superior after 10 min of PFT in each PE class, in line with previous studies that found PFT could improve PF [22,45,67,68]. One possible explanation is that PFT emphasizes neural involvement in the training process [63,69,70], affecting the entire neuromuscular system [69,71]. In addition, according to a previous study, PFT also strengthens the body’s stretch reflex, increasing the reflexivity of muscle activity through rapid stretching of the muscle spindle to promote muscle force and power output [26]. However, it was also found that students at the lower levels were less effective than those at the higher levels in terms of power generation, which could be related to the quality of the movements performed. PFT focuses on the quality of the movement rather than the load and quantity of the movement [26]. Hence, in lower-level students with weaker limb control, the quality of movement could have been compromised, and consequently their performance was worse than that of the upper-level students.
There are also limitations in this study. First, the participants were all primary school students with limited cognitive maturity, so the quality of movement completion was affected to a certain extent in the process of translating an understood movement into performance. Second, the program deliberately used simple, easy-to-implement movements, which reduced the intensity of the exercise to a certain extent. Finally, all the students came from one class at each level. This was done to accommodate the timetable, because PE lessons were not conducted for all students at the same time; we therefore chose the participants from one class at each level based on the slot made available by the PE teacher.
## 5. Conclusions
The study results identified that the 12-week PE program effectively improved the PF level of students in both the EC and CC. However, the PFT integrated into PE produced more positive effects than traditional PE on components of students’ PF such as speed, agility, and coordination, which revealed that PFT could be an acceptable and effective type of exercise for school children to improve their PF. The targeted selection of PFT exercises designed to be incorporated into existing PE modules could be adopted by PE teachers to develop students’ PF.
# Mental Health Conditions- and Substance Use-Associated Emergency Department Visits during the COVID-19 Pandemic in Nevada, USA
## Abstract
Background—Mental health conditions and substance use are linked. During the COVID-19 pandemic, mental health conditions and substance use increased, while emergency department (ED) visits decreased in the U.S. There is limited information regarding how the pandemic has affected ED visits for patients with mental health conditions and substance use. Objectives—This study examined the changes in ED visits associated with more common and serious mental health conditions (suicidal ideation, suicide attempts, and schizophrenia) and more commonly used substances (opioids, cannabis, alcohol, and cigarettes) in Nevada during the COVID-19 pandemic in 2020 and 2021 compared with the pre-pandemic period. Methods—The Nevada State ED database from 2018 to 2021 was used ($$n = 4,185,416$$ ED visits). Codes from the 10th Revision of the International Classification of Diseases were used to identify suicidal ideation, suicide attempts, schizophrenia, and the use of opioids, cannabis, alcohol, and cigarettes. Seven multivariable logistic regression models were developed, one for each condition, adjusting for age, gender, race/ethnicity, and payer source. The reference year was set as 2018. Results—During both of the pandemic years (2020 and 2021), particularly in 2020, the odds of ED visits associated with suicidal ideation, suicide attempts, schizophrenia, cigarette smoking, and alcohol use were all significantly higher than those in 2018. Conclusions—Our findings indicate the impact of the pandemic on mental health- and substance use-associated ED visits and provide empirical evidence for policymakers to direct and develop decisive public health initiatives aimed at addressing mental health- and substance use-associated health service utilization, especially during the early stages of large-scale public health emergencies, such as the COVID-19 pandemic.
## 1. Introduction
Mental health and substance use issues are intertwined, and both reportedly increased in the United States and globally during the COVID-19 pandemic [1,2,3,4,5]. Nevada has distinct characteristics in terms of mental health and substance use, with rates that are typically higher than the national average [6]. Nevada ranks 44th out of 51 states in the USA regarding the prevalence of mental illnesses [7]. Nevada is also one of the worst-performing states for access to mental health care services (39 out of 51 states) [8]. The pandemic had a significant economic impact on Nevada, a world tourism center. More than $90\%$ of those employed in the hospitality industry lost their jobs during the lockdown in 2020, raising concerns about the mental health of its residents during the pandemic [9]. The pandemic affected mental health and substance use not only through the disease itself, but also through lockdowns, isolation, the economic downturn, and job losses [10]. The COVID-19 pandemic also reduced emergency department (ED) visits in the US, with the lowest-acuity ED visits showing the most dramatic reduction, indicating that patients who did not require immediate medical attention were more likely to avoid going to EDs during the pandemic [11]; however, some conditions experienced proportionally smaller ED decreases, particularly those involving mental health and substance use [12,13]. The limited access to mental health care in Nevada [8] might push more patients with mental health-related conditions to EDs during crises, while the state also has limited ED resources compared with the national average [14]. Analyzing the types of ED visits that increased proportionally during the pandemic can inform policies for caring for vulnerable patients during a public health emergency or crisis, so that they do not end up in the ED, and Nevada statistics can provide insightful information on this matter.
The COVID-19 pandemic had a wave pattern. Depending on the wave and the state governors’ decisions, different policies were in place across the USA during the COVID-19 pandemic in 2020 and 2021 [15]. For example, in Nevada, COVID-19-related regulations were stricter in 2020 than in 2021 [16]. The COVID-19 pandemic literature on ED visits associated with mental health conditions and substance use has tended to focus on the period early in the pandemic in 2020 [15,17,18,19,20,21,22,23]. Ridout and colleagues, in a study conducted early in the pandemic in 2020, found that a significant number of youth, especially women with no prior psychiatric history in Northern California, were admitted to the ED for suicide-associated issues [21]. A cross-sectional study of ED visits by children (5–17 years old) with a primary mental health diagnosis in the Chicago area found that visits for suicide or self-injury increased by $6.69\%$ during the pandemic [22]. Venkatesh and colleagues, similar to Pines and colleagues, found that substance use-associated ED visits proportionally increased during the pandemic [15,17]. In a study inclusive of the Southern States, Patel and colleagues found an upsurge in opioid overdoses: $10.3\%$ more opioid-associated deaths occurred from January to October 2020 than in 2019 [24]. Although these studies offer insightful information on the pandemic’s early stages and specific facilities, they have limited potential for generalization to other facilities and the rest of the pandemic period.
Mental health conditions and substance use are linked, and substance use can be regarded as a mental health condition that also affects behavior [25]. For example, the rates of cigarette smoking are approximately two- to four-fold higher in patients with a psychiatric disorder [5]. Some mental health conditions and substance use might be of particular concern due to their high prevalence and/or serious outcomes, as well as their reported increases during the pandemic [26,27,28]. Suicidal ideation, suicide attempts, and schizophrenia are all common and serious conditions that rose during the COVID-19 pandemic [27,28]. Uncontrolled schizophrenia has been associated with an increase in suicide attempts, and suicide is the tenth leading cause of death in the US [29,30]. Opioids, cannabis, alcohol, and cigarettes are commonly used substances in the US [27], and their use reportedly increased during the pandemic [12,31].
By determining the prevalence of certain conditions among ED patients during the pandemic, policymakers would be able to construct vital public health interventions intended to target a subset of the population with a higher possibility of ED visits during a crisis. This higher possibility could be due to the rising prevalence of these conditions in the general population or dwindling non-ED-facility options during lockdowns in the pandemic. Another explanation would be that these subsets, relative to the general population, might exhibit less behavioral caution in terms of ED visits during the pandemic. It is worth mentioning that between 13.7 and $27.1\%$ of all ED visits in the USA could be unnecessary or treated at alternative sites [32]. Prior studies on mental health conditions and substance use associated with ED visits were conducted early in the pandemic, either in general (not on individual conditions) [20] or on just one condition [21]. The aim of this study was to compare potential changes in ED visits associated with common and/or serious mental health conditions (suicidal ideation, suicide attempts, and schizophrenia) and more commonly used substances (opioids, cannabis, alcohol, and cigarettes) during the COVID-19 pandemic in 2020 and 2021 as opposed to the pre-pandemic years. With this approach, this study attempted to examine whether the earlier effect of the pandemic would differ from the later effect on mental health and substance use among ED visits.
## 2.1. Data
The Nevada State Emergency Department Databases (SEDDN), containing all ED visits in 2018 and 2019 (the two years before the pandemic) as well as 2020 and 2021 (the first two years of the pandemic), were used. The SEDDN contains rich information on all non-federal acute community hospitals in Nevada [26]. All ED visits associated with opioids, cannabis, cigarette smoking, alcohol use, suicidal ideation, suicide attempts, and schizophrenia were identified using the International Classification of Diseases, 10th Revision (ICD-10). These codes are listed in Supplemental Table S1 and have been used in prior publications [26,33]. The University of Nevada, Las Vegas, institutional review board deemed this study exempt because the SEDDN database provides administrative data after complete de-identification [26]. A total of 4,185,416 ED visits (2018–2021) were included in the data analysis. The demographics of the study population, as well as the frequencies of the seven variables from 2018 to 2021, are presented in Table 1.
## 2.2. Measures and Data Analysis
Seven dichotomous dependent variables were studied here as follows: three common and serious mental health conditions, including suicidal ideation, suicide attempts, and schizophrenia, and four commonly used substances, including cigarette smoking, alcohol drinking, opioid use, and cannabis use. Age, gender, race/ethnicity, and payer source have been previously associated with these dependent variables [26] and were, therefore, included as independent variables in the regression models. In order to control for time and detect a potential trend, year was included as a dummy variable in all seven regression analyses, as in prior studies [34].
The patients’ age groups (<12, 12–17, 18–24, 25–34, 35–44 (reference), 45–54, 55–64, and ≥65), gender, payer source (Medicare, Medicaid, uninsured, other insurance, and private insurance (reference)), race/ethnicity (Black, Hispanic, Asian/Pacific Islander, White (reference), and others), and time (years 2018 (reference), 2019, 2020, and 2021) were the independent variables in each analysis [26].
Multiple visits from the same patient were treated as distinct ED visits because the data had been de-identified. As a result, the ED visit served as the unit of analysis [26]. To account for variation within hospitals due to the clustering effect, we utilized a generalized linear model for the multivariable analysis, treating hospital as a random effect while estimating the fixed effects of the independent variables on individual hospital discharges [26]. All statistical analyses were conducted using SAS software version 9.4 (SAS Institute Inc.; Cary, NC, USA). p-values of <0.05 (2-tailed) were considered statistically significant.
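As an illustration only, the sketch below shows one way to approximate such a model in Python using statsmodels' variational Bayes mixed GLM, with hospital as a random intercept and 2018 as the reference year. The actual models were fit in SAS 9.4; the DataFrame and its column names here are hypothetical.

```python
# Hypothetical sketch of one of the seven condition-specific models (here,
# suicidal ideation). The study fit the models in SAS 9.4; statsmodels'
# BinomialBayesMixedGLM is a variational approximation, not the same estimator.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

def fit_condition_model(df: pd.DataFrame) -> pd.Series:
    formula = ("suicidal_ideation ~ C(year, Treatment(2018)) + C(age_group)"
               " + female + C(race, Treatment('White'))"
               " + C(payer, Treatment('private'))")
    random = {"hospital": "0 + C(hospital)"}  # random intercept per hospital
    model = BinomialBayesMixedGLM.from_formula(formula, random, df)
    result = model.fit_vb()
    # Posterior means of the fixed effects are on the log-odds scale;
    # exponentiating gives adjusted odds ratios.
    return pd.Series(np.exp(result.fe_mean), index=model.fep_names)
```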
## 3. Results
The numbers of ED visits were 1,107,950 ($26.5\%$), 1,153,000 ($27.5\%$), 924,887 ($22.1\%$), and 999,579 ($23.2\%$) from 2018 to 2021, respectively, with a total of 4,185,416 (Table 1). In all four years, more than $50\%$ of ED visits were by women. Medicaid was the most prevalent payer source, covering more than $35\%$ of ED visits. The proportion of White people who visited an ED decreased from $54.0\%$ to $48.8\%$, whereas it increased for Black, Hispanic, and Asian people. Among all ED visits, the percentage of suicidal ideation was 1.69 in 2018, peaked at 1.96 in 2020, and decreased to 1.89 in 2021; the percentage of suicide attempts was 0.11 in 2018, peaked at 0.13 in 2020, and decreased to 0.12 in 2021; the percentage of schizophrenia was 1.09 in 2018, peaked at 1.87 in 2020, and decreased to 1.48 in 2021; the percentage of opioid use was 0.67 in 2018 and peaked at 0.70 in 2020; the percentage of cannabis use was 1.26 in 2018 and peaked at 1.48 in 2020; the percentage of alcohol drinking was 3.33 in 2018 and peaked at 4.00 in 2020; and the percentage of smoking was 7.43 in 2018 and peaked at 9.67 in 2020 (Table 1). Generally, the rates of these conditions were higher in 2018 than in 2019 (Table 1). Therefore, 2018 was set as the reference year.
Table 2 indicates the factors associated with the mental health conditions of suicidal ideation, suicide attempts, and schizophrenia among ED visits in Nevada from 2018 to 2021. The odds of suicidal ideation-, suicide attempt-, and schizophrenia-associated ED visits were significantly higher during both years of the pandemic (2020 and 2021) compared with 2018. The adjusted odds of suicidal ideation-associated ED visits were $11\%$ ($95\%$ CI = 1.04–1.19) and $9\%$ ($95\%$ CI = 1.02–1.17) higher in 2020 and 2021, respectively, than those in 2018. The odds of suicide attempt-associated ED visits were $20\%$ ($95\%$ CI = 1.09–1.33) and $16\%$ ($95\%$ CI = 1.05–1.27) higher in 2020 and 2021, respectively, than those in 2018. The odds of schizophrenia-associated ED visits were $60\%$ ($95\%$ CI = 1.47–1.75) and $28\%$ ($95\%$ CI = 1.17–1.40) higher in 2020 and 2021, respectively, than those in 2018. Compared with 2018, the odds of suicidal ideation were significantly $8\%$ lower ($95\%$ CI = 0.86–0.99) and the odds of schizophrenia $23\%$ higher ($95\%$ CI = 1.12–1.34) in 2019. Other factors were also related to the odds of these mental health conditions for ED visits. ED visits associated with these three mental health conditions were significantly less likely to be by women (suicidal ideation: OR = 0.43, $95\%$ CI = 0.41–0.46; suicide attempts: OR = 0.927, $95\%$ CI = 0.865–0.99; schizophrenia: OR = 0.33, $95\%$ CI = 0.31–0.35). The age group of 12–17 years had significantly higher odds of suicidal ideation- (OR = 1.45, $95\%$ CI = 1.32–1.59) and suicide attempt- (OR = 4.86, $95\%$ CI = 4.32–5.46) associated ED visits compared with the reference age group of 35–44 years. However, the reference group had higher odds of schizophrenia-associated ED visits compared with the other five age groups (Table 2). Compared with the White race, the three other races (Black, Hispanic, and Asian) had significantly lower odds of suicidal ideation-, suicide attempt-, and schizophrenia-associated ED visits (Table 2), except that Black people had higher odds of schizophrenia-associated ED visits compared with White people (OR = 1.41, $95\%$ CI = 1.32–1.52). Compared with private health insurance, both Medicaid and Medicare were significantly associated with higher odds of all three types of mental health-associated ED visits (Table 2).
Table 3 indicates the factors associated with the use of opioids, cannabis, alcohol, and cigarette smoking among ED visits in Nevada from 2018 to 2021. Opioid- and cannabis-associated ED visits had significantly lower odds in 2019 and 2021 compared with 2018. Cannabis-associated ED visits had significantly $11\%$ higher odds in 2020 compared with 2018 ($95\%$ CI = 1.06–1.16). Cigarette smoking- and alcohol drinking-associated ED visits had higher odds in 2020 and 2021 compared with 2018, with the highest odds for smoking-associated ED visits in 2020 (OR = 1.27, $95\%$ CI = 1.22–1.32). Compared with men, women had lower odds of ED visits associated with all four conditions (Table 3). Compared with the 35–44-year age group, the 25–34-year age group had significantly higher odds of opioid-associated ED visits, whereas the six other age groups had significantly lower odds. Cannabis-associated ED visits had higher odds in the 18–24- and 25–34-year age groups compared with the 35–44-year age group. None of the age groups had significantly higher odds of smoking-associated ED visits compared with the 35–44-year age group. Alcohol drinking-associated ED visits had significantly higher odds in the 45–54- and 55–64-year age groups compared with the 35–44-year age group. Compared with the White race, the Black, Hispanic, and Asian races had significantly lower odds of opioid-, smoking-, and drinking-associated ED visits. Regarding cannabis-associated ED visits, only the Black race had significantly higher odds, while the Hispanic and Asian races had significantly lower odds compared with the White race. ED visits covered by Medicare and Medicaid, as well as those of the uninsured, had higher odds of opioid, cannabis, cigarette smoking, and alcohol use compared with ED visits covered by private insurance (Table 3).
## 4. Discussion
Here, we examined certain mental health and substance use conditions among ED visits in Nevada between 2018 and 2021 using multivariable analysis. We investigated 2020 and 2021 separately in order to understand how different time points during the pandemic affected ED visits associated with these conditions. We found that ED visits increased from 2018 to 2019, decreased in 2020, and increased again in 2021, although not to the pre-pandemic level. This trend is consistent with the national trend in ED visits [35]. Generally, COVID-19-related lockdowns in Nevada were laxer in 2021 than in 2020 [16]. Despite the fact that there were more COVID-19 cases in 2021 than in 2020 [16], our findings suggest that the pandemic effects were possibly stronger in 2020 than in 2021, although the details are more nuanced.
Most previous studies on mental health condition-associated ED visits during the COVID-19 pandemic were limited to 2020 [20]. A study of one million non-COVID-19 ED visits in Missouri, USA, in 2020 found that the proportion of mental health conditions among all ED visits increased, although the authors did not specify which mental health conditions were included [20]. Using national data, Holland and colleagues found that suicide attempt-associated ED visits increased in 2020 compared with the pre-pandemic period, but they did not study those rates in 2021 [35]. We found that the odds of suicidal ideation-, suicide attempt-, and schizophrenia-associated ED visits significantly increased in 2020 and 2021 compared with 2018, with the highest odds in 2020, indicating a stronger earlier impact of the pandemic than in 2021. Regarding suicide, Nevada, which ranked seventh in the US in 2019 with 642 fatalities and a rate of 19.8 suicides per 100,000 people, is now ranked 12th and is no longer among the top 10 states [36]. Nevada’s Office of Suicide Prevention reports that, while suicide rates nationwide have increased, they have remained stable or even decreased in Nevada [36]. Neither suicidal ideation- nor suicide attempt-associated ED visits significantly increased in 2019 compared with 2018, indicating that their increase during the pandemic may not have been due to a gradual pre-existing increase in Nevada. However, schizophrenia-associated ED visits significantly increased in 2019 compared with 2018. In our study, the odds ratio for schizophrenia in 2020 was the highest (Table 2). More research is needed to determine whether this highest odds ratio is related to the pandemic or to a gradual increase. It has been reported that the frequency of schizophrenia among ED visits increased in the early pandemic, which might have been due to an increased need for emergency care among patients with schizophrenia [13] or an increase in the incidence of the disease.
The COVID-19 pandemic has been associated with increased alcohol consumption and cigarette smoking [10]. We found that the odds of alcohol use and cigarette smoking among ED visits significantly increased in 2020 and 2021 compared with 2018, with the highest odds in 2020, another indication of the strong early impact of the pandemic. Consistently, data on ED visits in the Washington, DC/Baltimore, and Maryland areas in 2019 and 2020 revealed that a higher percentage of patients reported alcohol drinking during the pandemic [12]. Another study in Ontario, Canada, found that the proportion of all-cause ED visits due to alcohol increased by $11.4\%$ [37]. A study from Minnesota, USA, indicated a significant increase in ED visits among smokers [38]. Alcohol use and cigarette smoking have consistently been reported to have increased among ED visits during the COVID-19 pandemic [37,38], which could be a result of their increased rates during the pandemic [10] and/or a possible increase in emergency conditions among users.
Opioids and cannabis had been subject to new rules and legislation at the federal and state levels prior to the pandemic [26]. Beginning in 2010, there was a decrease in opioid-associated ED visits, which coincided with federal initiatives calling for more prudent opioid prescription [26]. Policies regarding cannabis consumption mainly depend on state authorities [26]. Nevada legalized cannabis for medical and recreational use in 2001 and 2016, respectively [26,39,40]. The legal use of medical and recreational cannabis went into effect in 2013 and 2017, respectively, while stricter opioid prescription laws went into effect in Nevada in 2018 [39,40]. We found that opioid-associated ED visits did not significantly increase during the pandemic compared with 2018, and even significantly decreased in 2019 and 2021 compared with 2018, which might be related to the strict opioid prescription laws in Nevada [39,40]. One would accordingly have expected opioid use in all three years (2019, 2020, and 2021) to be lower than in 2018. The lack of a difference between 2020 and 2018, however, may indicate that the strong early impact of the pandemic offset the strict opioid prescription laws in effect since 2018 [39,40]. Using national data, Holland and colleagues found that opioid overdose-associated ED visits increased in 2020 compared with the pre-pandemic period, but they did not study those rates in 2021 [35]. Patel and colleagues investigated patients presenting to the ED with opioid overdoses at the University of Alabama in 2019 and 2020 and reported an increase in opioid overdose visits in 2020 compared with 2019 [24]. The differences between our results and those of others might be due to the use of different inclusion criteria: our inclusion criterion was opioid use, whereas other studies used opioid overdoses [24,35]. According to a report by the Centers for Disease Control and Prevention, deaths due to opioid overdoses decreased in Nevada from 2018 to 2019 but increased from 2019 to 2020 [41].
Less information is available regarding cannabis-associated ED visits. The proportion of cannabis-associated ED visits among children was previously found to have increased in 2020 compared with 2019 [2]. We found that cannabis-associated ED visits significantly increased in 2020 and decreased in 2021 compared with 2018. Nevada statistics indicate that cannabis sales increased from 2018 to 2021 [42]. Therefore, the decrease in cannabis-associated ED visits may not be due to lower consumption. More behavioral research is needed to investigate the underlying causes of this decrease and whether it will persist in the future. According to recent data, recreational cannabis legalization may be a harm-reduction strategy to combat the opioid epidemic and has been associated with reduced opioid-related ED visits, particularly among men and adults between the ages of 25 and 44 [43]. Future studies will determine whether the lack of a significant increase in opioid-related ED visits in Nevada can be attributed to the harm-reduction effects of recreational cannabis legalization.
Our study has some limitations. We identified the seven mental health and substance use conditions using ICD codes, so any coding error could have resulted in the misclassification of ED visits. There are also other common and serious mental health and substance use conditions. Notably, uncontrolled depression, anxiety disorders, bipolar disorders, and other mental health conditions can all contribute to suicide and have been exacerbated by the COVID-19 pandemic [12,13,17,18,19]. We were not able to accommodate all of these important conditions in our study, which aimed to investigate each condition separately rather than combining them. Furthermore, we were unable to account for all potential confounding factors. For example, opioid use has been subject to strict federal and state regulations [26], but we did not account for this in our regression model. We analyzed the odds of certain conditions across years. An increase in odds indicates that the proportion of ED visits for one condition versus others is higher in a given year, but it does not necessarily imply that the number of that specific type of ED visit increased over the year. In our analysis, however, the observed increases in odds were often accompanied by corresponding increases in numbers (Table 1).
Our findings add to the literature in that we analyzed a longer period of the COVID-19 pandemic, covering mental health- and substance use-associated ED visits in both 2020 and 2021, whereas previous studies mainly focused on 2020 [35]; this allowed both the early and the later impact of the pandemic to be examined. Our findings indicate a stronger early impact than a later one. We also specifically investigated serious mental health conditions, rather than less serious mood disorders [20]. Further, we looked at the different mental health and substance use conditions separately, which is very important for making informed decisions: policymakers need to know which mental health conditions or substances require more attention.
## 5. Conclusions
In conclusion, among the seven common mental health condition- and substance use-associated ED visit categories in Nevada, suicidal ideation, suicide attempts, schizophrenia, cigarette smoking, and alcohol drinking had significantly higher odds in both 2020 and 2021 compared with the pre-pandemic period. Cannabis-associated ED visits had significantly higher odds only in 2020 compared with the pre-pandemic period, and opioid-associated ED visits did not have significantly higher odds during the pandemic compared with the pre-pandemic period. The pandemic had stronger early effects on mental health and substance use, and these effects may have decreased as the pandemic continued. This information can help policymakers to better comprehend how the outbreak affected society at both the state and national levels, make informed decisions, and develop effective policy measures and programs to prepare an effective response to large-scale public health emergencies, such as the COVID-19 pandemic, at their early stages.
# Smoking Bans and Circulatory System Disease Mortality Reduction in Macao (China): Using GRA Models
## Abstract
This study evaluates the association between smoking rates and mortality from circulatory system diseases (CSD) after the implementation of a series of smoking bans in Macao (China). (1) Background: Macao has phased in strict total smoking bans since 2012. During the past decade, smoking rates among Macao women have dropped by half, and CSD mortality in Macao has also shown a declining trend. (2) Methods: Grey relational analysis (GRA) models were adopted to rank the importance of key factors, such as income per capita, physician density, and smoking rates. Additionally, regressions were performed with the bootstrapping method. (3) Results: Overall, the smoking rate was ranked as the most important factor affecting CSD mortality in the Macao population, and it consistently remained the primary factor for Macao’s female population. On average, five CSD-caused deaths per 100,000 women were avoided each year, equivalent to about $11.45\%$ of the mean annual CSD mortality. (4) Conclusions: After the implementation of smoking bans in Macao, the decrease in the smoking rate among women played a primary role in the reduction in CSD mortality. To avoid excess CSD mortality due to smoking, Macao needs to continue to promote smoking cessation among the male population.
## 1. Introduction
Tobacco smoking has been well established as an independent, modifiable risk factor for premature mortality from several medical causes, such as coronary, cerebral, and peripheral arterial diseases [1,2]. Cardiovascular diseases have been the leading cause of smoking-associated death worldwide [3,4]. While heavy smoking is equally hazardous to both genders, women smokers have been found to be at greater risk of smoking-related cardiovascular diseases, such as coronary heart disease [3,5], ischemic heart disease [6], and acute myocardial infarction (AMI) [7,8,9].
The WHO (World Health Organization) Framework Convention on Tobacco Control (WHO FCTC) has been recognized as the most powerful tool to counter tobacco’s negative impacts. Smoking bans are supported by the WHO on the grounds that they improve health outcomes by lowering exposure to second-hand smoke (SHS) [10] or third-hand smoke (THS) [11] and potentially reducing the number of smokers. Smoking cessation by the individual is an effective measure to reduce cardiovascular diseases, which also helps to reduce the economic burden of healthcare [12,13,14,15,16,17,18]. Overall, women smoke less than men, but it is noteworthy that, despite significant tobacco control efforts, women’s smoking rates have barely changed and in some countries have even increased [19]. Around the world, increasing numbers of young women (under 25 years old) are smoking tobacco [19]. Targeting tobacco smoking was especially predicted to reduce premature cardiovascular disease mortality among women in the high-income Asia-Pacific and Western Europe regions [20].
## 1.1. Tobacco Smoking in China
China is the country with the greatest tobacco consumption in the world [18]. In 2018, the smoking rate among the population over the age of 15 in China was $26.6\%$, of which the male smoking rate was $50.5\%$. The smoker population in China exceeds 300 million. It is estimated that more than 1 million people in China lose their lives due to tobacco smoking every year, and mortality may increase to 2 million per year by 2030 if no effective action is taken [21].
Some major central cities and provincial capital cities in China have taken strict actions in recent years to adopt smoke-free laws or implement comprehensive tobacco control actions [15,17,22]. Smoking is prohibited in all indoor workplaces, indoor public places, and public transportation in 9 of the 21 Chinese cities that have enacted smoke-free laws [22].
## 1.2. Health System and Smoking Bans in Macao (China)
The Macao Special Administrative Region of China (“Macao”) is located on the Pearl River Delta on the southeast coast of mainland China. With a total population of about 671,900 [23], Macao has the world’s highest population density of 20,620 persons per square kilometer [24]. With a life expectancy of 84.98 years [25], people over the age of 65 account for $12.1\%$ of the population [24].
Macao’s health system is a hybrid one, in which a public health provider has a key role and some private ones play supplementary roles [26]. Circulatory system diseases are the second leading cause of death in Macao, accounting for about $23.9\%$ of total deaths in 2021 [27].
Macao has grown into a major international resort city and a top destination for gambling tourism, which accounts for $60\%$ of the local GDP and $70\%$ of local tax revenue. In 2019, the total number of workers in the six tourism satellite industries (gaming, retail trade, food and beverage, hotel, passenger transportation, and travel agency services) was approximately 203,000, accounting for nearly half of Macao’s working population [28]. In particular, more than twenty percent of Macao’s working population is in the gaming industry, which is notorious for heavy smoking and poor indoor air quality. It was estimated that, before the smoking bans, 20 percent of local deaths each year were caused by smoking [29].
Despite strong resistance from the gaming industry [30], during the past decade, the Macao government has put significant effort into establishing a smoke-free local environment through a variety of approaches, including legislation, law enforcement, health education, and smoking cessation aid. Smoking bans were phased in from 2012 to ensure that indoor air quality meets safety standards, protecting the health of residents and visitors alike. Macao implemented a partial ban with casinos as the exception in January 2012, a full smoking ban that still allowed smoking lounges in local casinos in October 2014 [30], and later a blanket smoking ban without smoking lounges after 2018 [31]. Virtually all forms of tobacco advertising and promotion through any medium are prohibited.
The Macao government has demonstrated firm determination and action in enforcing these smoking bans. From 2018 to 2020, tobacco control law enforcement officers in Macao inspected a total of 859,000 locations, and the total number of prosecutions for smoking ban offenses reached 13,300 [32]. The maximum fine for smoking offenses was raised from the equivalent of USD 75 to USD 188. The Health Bureau of Macao provides free clinical services for smoking cessation. In August 2022, an amendment to the smoking ban was passed in Macao, prohibiting the manufacturing, transporting, distribution, importing, or exporting of e-cigarettes in and out of Macao. The retail sale, advertising, or promotion of e-cigarettes is also prohibited.
Figure 1 below displays the time trends of CSD mortality and smoking rates among male and female residents of Macao over the past 20 years. The overall smoking rate among the Macao population aged 15 years and above decreased from $33.7\%$ in 2011 (upon the implementation of the smoking ban) to $11.2\%$ in 2020 [33].
As displayed in Panel A of Figure 1, neither the curve of male smoking rates nor that of male CSD mortality rates shows a clear trend. In Panel B of Figure 1, the curve of female CSD mortality rates demonstrates an observable declining tendency and is associated with an apparent declining trend in smoking rates.
The literature providing empirical evidence of the beneficial impacts of smoking bans on circulatory system diseases, especially cardiovascular diseases, is rich [4,14,15,16,17,34,35,36]. However, only a small amount of the literature examines empirical health evidence regarding smoke-free policies in China [15,17], and there has been no empirical study examining the health outcomes of the smoking bans implemented in Macao during the past decade. Meanwhile, due to confounding factors such as advances in health technology for treating underlying diseases, the promotion of healthy lifestyles, and preventive medicine, the effects of smoking bans on health outcomes in major cities or nationwide may not appear significant in empirical studies [37].
## 1.3. Aims of this Study
Applying grey relational analysis (GRA) models, this study aimed to assess the contribution of smoking bans to the decline in circulatory system disease mortality in Macao (China). GRA assumes a non-functional sequence model and does not generate results in conflict with qualitative analysis; its advantages include being computationally simple and not requiring large amounts of data or data normalization [38,39]. The method can be flexibly applied to various fields [38,40,41].
The findings of this study may provide an empirical evaluation of smoking bans in Macao and policy suggestions for the next stage. The empirical evidence on the positive effects of smoking bans on health outcomes will serve as a policy reference for promoting a comprehensive smoke-free policy, even among the tourism and hospitality sectors in China and other Asian nations.
## 2. Research Methods
A series of grey relational analysis (GRA) models were applied in this study to examine the role of the smoking rate in reducing CSD mortality. Ordinary least squares (OLS) regression analysis was performed to estimate the size of the association. Due to the small number of observations in this sample ($$n = 40$$), a bootstrapping method (with 1000 repetitions) was adopted to generate robust standard errors.
## 2.1. Grey Relational Analysis (GRA)
Also known as Deng’s grey incidence analysis, GRA models are based on grey system theory [40,42,43,44,45]. Grey system theory rests on the recognition that all natural and social systems are intrinsically uncertain and subject to a variety of uncertainties and noises, caused by internal or external disturbances as well as the limitations of human knowledge and perception. Discrepancies in the system or the available data are one of the defining qualities of an uncertain system [45]. Generally, imperfection or inaccuracy of information can be categorized into three types based on its origin (namely conceptual, level-of-perspective, and prediction inaccuracies). For instance, phrases such as “large”, “small”, “fat”, “thin”, “good”, “bad”, “young”, and “beautiful” are commonly used, but they are subjective and lack a precise meaning [45]. A system with imperfect information is regarded as lying midway between a white system (with perfect information) and a black system (with zero information) [45].
Based on the intuition introduced above, GRA analyzes the degree of geometric curve similarity between data sequences. The higher the curve similarity, the higher the judged relevance of the data sequences, and vice versa [42,45]. In this way, a GRA model reflects the interactions between the factors examined based on the correlation coefficients of points. Specifically, Deng’s GRA model follows the computational steps below [40,42,43,44,45,46,47].
Step 1 is to construct the reference sequence $x_0$ (the dependent variable) and the comparison sequences $x_i$ ($i = 1, 2, \ldots, n$), which serve as the independent variables [48].
Step 2 is to calculate the grey relational coefficient $\gamma_i(k)$ for each observation $k$ ($k = 1, 2, \ldots, m$), according to Equations (1)–(3):

$$\gamma_i(k) = \gamma\big(x_0(k), x_i(k)\big) = \frac{\min_i \min_k \left| x_0'(k) - x_i'(k) \right| + \varepsilon \max_i \max_k \left| x_0'(k) - x_i'(k) \right|}{\left| x_0'(k) - x_i'(k) \right| + \varepsilon \max_i \max_k \left| x_0'(k) - x_i'(k) \right|} \quad (1)$$

where

$$x_i'(k) = \frac{x_i(k)}{\bar{x}_i} \quad (2), \qquad \bar{x}_i = \frac{1}{m} \sum_{k=1}^{m} x_i(k) \quad (3)$$

for $i = 1, 2, \ldots, n$ and $k = 1, 2, \ldots, m$ ($k$ indicates the observation time or observation number).
In Equation (1), $\varepsilon$ has a value between 0 and 1, and the middle value of 0.5 is often assumed [42,49]. $\varepsilon$ is also called the resolution coefficient.
Step 3 uses the results of $\gamma_i(k)$ from Step 2 to calculate the grey relational degree $\beta_i$, as in Equation (4); $\beta_i$ is also called Deng’s degree of grey incidence:

$$\beta_i = \frac{1}{m} \sum_{k=1}^{m} \gamma_i(k) \quad (4)$$

The correlation degree is interpreted as a ranking order. The parameters of the GRA models range from 0 to 1 and are regarded as indicating a strong association when close to 1 and a weak association when far from 1 [42,49,50]. The higher the correlation degree, the higher the ranking [47]. In this way, the grey relational degree reflects the inter-influences among the factors analyzed [43,45,48]. To justify the comparison of coefficients in GRA, each variable in the original data is standardized, removing the units of measurement, before the correlation degree is calculated.
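To make the computation concrete, below is a minimal NumPy sketch of Deng's GRA following Equations (1)–(4); the study itself used dedicated GRA software (see Section 2.4), and the toy data here are hypothetical placeholders.

```python
# Minimal sketch of Deng's GRA (Equations (1)-(4)). `x0` is the reference
# sequence (e.g., CSD mortality) and each row of `X` a comparison sequence.
import numpy as np

def deng_gra(x0, X, eps=0.5):
    """Return Deng's degree of grey incidence for each comparison sequence."""
    x0n = x0 / x0.mean()                     # mean-value normalization, Eqs. (2)-(3)
    Xn = X / X.mean(axis=1, keepdims=True)
    diff = np.abs(Xn - x0n)                  # |x0'(k) - xi'(k)| for every i, k
    dmin, dmax = diff.min(), diff.max()      # two-level min and max over i and k
    gamma = (dmin + eps * dmax) / (diff + eps * dmax)   # Eq. (1)
    return gamma.mean(axis=1)                # Eq. (4): average over observations k

# Hypothetical toy data: 20 annual observations, 3 candidate factors
rng = np.random.default_rng(0)
x0 = rng.uniform(60, 100, 20)                # e.g., CSD mortality per 100,000
X = rng.uniform(0, 1, (3, 20))               # e.g., income, physicians, smoking
print(deng_gra(x0, X))                       # higher degree = higher-ranked factor
```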
Javed et al. [42,49,51] and Liu et al. [45] further developed and refined GRA models, including several types of “grey relational degree” (or “degree of grey incidence”) with detailed computations. While absolute GRA analyzes the correlations between factors from the perspective of specific points, relative GRA takes an integral, whole-curve perspective [45]. The SDGRA, calculated as the average of the absolute GRA and relative GRA degrees, reflects both the similarity of the sequences and the closeness of their rates of change, making it a comprehensive indicator of sequence relationships [45]. Further, the SSGRA model was developed by calculating the average of Deng’s GRA and absolute GRA degrees. SSGRA has the advantage of reflecting the “overall closeness between two sequences based on particular points and integral perspectives” [49].
In summary, Deng’s GRA model is typically regarded as the baseline model, while the estimations of the SDGRA and SSGRA models are preferred to those of the absolute and relative GRA models [49].
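Given per-factor degrees from the component models, the composite degrees described above reduce to simple averages; a small sketch under those definitions (input arrays are hypothetical):

```python
# Composite grey relational degrees as defined above; inputs are per-factor
# degree arrays produced by the component GRA models.
import numpy as np

def sdgra(absolute_deg, relative_deg):
    """SDGRA: average of the absolute and relative degrees."""
    return (np.asarray(absolute_deg) + np.asarray(relative_deg)) / 2.0

def ssgra(deng_deg, absolute_deg):
    """SSGRA: average of Deng's degree and the absolute degree."""
    return (np.asarray(deng_deg) + np.asarray(absolute_deg)) / 2.0
```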
## 2.2. Advantages of GRA Models
When compared to traditional statistical inference models, GRA models have several advantages for decision making [51,52]. First, GRA can effectively provide meaningful inference even with missing, insufficient, or incomplete data or with uncertainties and incomplete information [51,53].
Second, unlike statistical and probability theory models, GRA models do not need a normal distribution assumption or a large sample size [45,54]. With only a small amount of data, GRA can reliably identify key factors based on the relationship between the reference series and the comparability series [50]. The dataset available in this study consists solely of aggregated annual disease mortality rates and smoking rates from the past 20 years, which are insufficient for performing traditional regression analysis directly. Despite these data limitations, GRA models can yield meaningful analysis results.
Third, in many decision- or policy-making scenarios, the order of relationship closeness determined by the grey relational degree is frequently more appropriate to use than the precise numerical values of the estimated coefficients [51,53]. In this study, it was meaningful to obtain ranking information about the importance of smoking rates among multiple confounding factors, which might also contribute to the reduction in CSD mortality.
In addition to engineering [55], management [52], and environmental science [56], GRA models have been adopted in healthcare management studies to evaluate patient satisfaction [42,49], healthcare service quality [57,58], performance [59], efficiency [60,61], healthcare resource allocations [62,63], etc.
## 2.3. Ordinary Least Squares (OLS) Regression Analysis with Bootstrapping
OLS regression analysis was performed based on the relevant variables identified using GRA models. The Ramsey regression equation specification error test (RESET) was performed as a diagnostic test of regression specification error for potentially omitted variables.
To address the issue of a small sample size, we adopted the bootstrap method to generate robust standard errors. The bootstrapping method is valid for a small sample because it is a nonparametric approach for evaluating the distribution of statistics based on random resampling [64]. Unlike a traditional parametric approach, the bootstrapping method does not depend upon strong distributional assumptions about the sample (such as i.i.d. or normally distributed data). Instead, it estimates the asymptotic covariance matrix by random resampling from the empirical distribution [65].
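A minimal sketch of this resampling scheme for OLS coefficients is given below (the study itself ran it in Stata 14 with 1000 repetitions); the input arrays are hypothetical.

```python
# Sketch of bootstrapped OLS standard errors via row resampling; the study
# used Stata 14 with 1000 repetitions. `y` and `X` are hypothetical arrays.
import numpy as np

def bootstrap_ols_se(y, X, reps=1000, seed=0):
    """Nonparametric bootstrap of OLS coefficients by resampling observations."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])     # add an intercept column
    betas = np.empty((reps, Xc.shape[1]))
    for r in range(reps):
        idx = rng.integers(0, n, n)           # resample rows with replacement
        betas[r] = np.linalg.lstsq(Xc[idx], y[idx], rcond=None)[0]
    return betas.std(axis=0, ddof=1)          # bootstrap standard errors
```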
## 2.4. Data Analysis
The baseline model of GRA in this study was Deng’s GRA model, in which the arithmetic mean is taken as the initial point [54]. SDGRA and SSGRA models were the main models. Absolute and Relative GRA were performed only for reference.
The three explanatory variables were income per capita, physician density, and smoking rate, with observations from 2000 to 2020. As a robustness check, the explanatory variables were also examined with observations lagged by one year (a sketch of this lagging step follows). Additional robustness checks included the rate of alcohol use and the rate of overweight from 2000 to 2015 as explanatory variables.
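The one-year lag can be illustrated with a short pandas sketch; the values below are hypothetical placeholders, not the Macao data.

```python
# Sketch of the one-year-lag robustness check; all values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "year": [2000, 2001, 2002, 2003],
    "income_per_capita": [119.0, 121.5, 124.0, 127.2],
    "physician_density": [2.1, 2.2, 2.2, 2.3],
    "smoking_rate": [16.0, 15.6, 15.2, 14.9],
}).sort_values("year")

for col in ["income_per_capita", "physician_density", "smoking_rate"]:
    df[col + "_lag1"] = df[col].shift(1)   # previous year's value

df_lagged = df.dropna()                    # drop the first year (no lag available)
```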
The Gray Level Correlation Software 7.0.1 (Grey System Research Institute, Nanjing, China) (available at http://igss.nuaa.edu.cn, accessed on 18 December 2022) was adopted to perform Deng’s, absolute, relative, and SDGRA models. The SSGRA model was computed using Microsoft Excel (Version 2002).
The Stata 14 statistical package (Stata Corp LP, College Station, TX, USA) was used to perform OLS regression analysis.
## 3.1. Data Sources and Ethical Declaration of the Data
Annual data on income per capita and physician density from the year 2001 to 2020 were obtained from the Statistics and Census Service of Macao. Annual data on resident smoking rates were collected from Macao Health Bureau and Macao Sports Bureau.
Residents’ alcohol consumption rates and obesity data were obtained from the Macao Citizen Physical Fitness Monitoring Report (2001, 2005, 2010, and 2015), which was sponsored and published by the Sports Bureau of the Macao Government. The original alcohol consumption and obesity rates comprised discrete observations for the years 2001, 2005, 2010, and 2015 from the four waves of the survey.
Using only publicly available statistical data disclosed by government departments, this study did not collect any personal data. No experimental designs were used, nor were any patients or survey respondents involved. Therefore, this study did not require additional ethics approval.
## 3.2. Dependent Variables
The mortality rate of circulatory system disease (CSD) (ICD10: I00–I99) [66] from 2001 to 2020 was analyzed in this study. CSD in the ICD10 includes: acute rheumatic fever (I00–I02); chronic rheumatic heart diseases (I05–I09); hypertensive diseases (I10–I15); ischemic heart diseases (I20–I25); pulmonary heart disease and diseases of pulmonary circulation (I26–I28); other forms of heart disease (I30–I52); cerebrovascular diseases (I60–I69); diseases of arteries, arterioles, and capillaries (I70–I79); diseases of veins, lymphatic vessels and lymph nodes, not elsewhere classified (I80–I89); and other and unspecified disorders of the circulatory system (I95–I99).
In Macao’s population, cardiovascular and cerebrovascular diseases account for more than $90\%$ of the deaths in the category of CSD [27]. Mortality rates for the total population, male residents, and female residents over the study period were analyzed separately.
## 4.1. Descriptive Statistics of the Data
Table 1 reports the descriptive characteristics of the key variables analyzed in this study. As reported in Panel A, during the study period, mean CSD mortality in Macao was 86.9 per 100,000 people, with similar levels among the male and female populations. The density of physicians was about 2.4 per 1000 people.
While the population smoking rate averaged about $15\%$ during the study period, the smoking rate was as high as $33\%$ among men, in contrast to about $2\%$ among women.
As reported in Panel B of Table 1, the rate of alcohol use among male residents in Macao during the study period was $46.2\%$, about 3.4 times the female rate. While the male rate of alcohol use in Macao showed only a moderate increase of about 2.68 percentage points, the female rate increased from $10.2\%$ in 2001 to $16.96\%$ in 2015. In contrast to the stable overweight and obesity rate among women aged 40 and over, the rate among men increased by about 15.6 percentage points, from $29.1\%$ in 2001 to $44.7\%$ in 2015.
## 4.2. Results of GRA Models
GRA models were applied to rank the importance of the relevant factors of CSD mortality in Macao from 2001 to 2020. As reported in Table 2, the baseline model results show that, for the male population in Macao, physician density and smoking rate are ranked as the most important determinants, and the results are consistent across all GRA models. For the female population, physician density and smoking rate are also important determinants, and Deng’s GRA and SDGRA models rank the women’s smoking rate as the most important factor. The ranking pattern for the total population is largely a mixture of the male and female patterns.
Considering the time-lag effects of physician density (primary care) and smoking rates, we further lagged the explanatory variables by one year and performed the same analysis. As reported in Table 3, the GRA ranks of the relevant male CSD mortality factors are consistent with the baseline model. For the female population, all models except the relative GRA model consistently ranked the smoking rate first. In particular, Deng’s GRA, SDGRA, and SSGRA models were found to be the most comprehensive and effective evaluation models.
For robustness checking, the GRA tests were expanded by adding two extra variables: the rate of alcohol use and the rate of overweight and obesity among people aged 40 and older, from 2001 to 2015. As reported in Table 4, the GRA models suggest that, for the male population, the rate of alcohol use and physician density are the leading factors relevant to CSD mortality during the study period. For female CSD mortality, the smoking rate is consistently rated as the most significant relevant factor, followed by the risk factor of overweight and obesity (aged 40 and over).
## 4.3. Regression Analysis of CSD Mortality in Macao
Based on the GRA results, we also performed regression analysis on CSD mortality and the relevant factors, as shown in Table 5. The dependent variable was the annual CSD mortality rate, with 40 observations pooled from the male and female populations. Smoking rate and physician density were included as the two explanatory variables because the GRA methods rated these two factors as the most important. A dummy variable, “female after smoking ban”, was generated to indicate female observations from the implementation of the smoking ban in 2012 onward. The estimated coefficient of this variable captures the excess change in female mortality after the implementation of the smoking ban.
Column [1] reports the regression analysis results of the baseline model. While physician density has a highly significant negative association with CSD mortality, the coefficient of smoking rate is insignificant. This may be mainly due to the insignificant effects of smoking rates among men.
Column [2] is the full model, including the variable of interest, “female after smoking ban”. As reported in Column [2], an increase in physician density of 1 per 1000 people is significantly associated with a reduction of about 8.95 CSD deaths per 100,000 people, while CSD mortality in Macao after the implementation of the smoking ban in 2012 shows an extra annual reduction of 5 deaths per 100,000 women. In addition, the R-squared is 0.283, the highest among the three specifications tested.
Column [3] reports results without including the smoking rate in the regression, and the results are robust.
The bottom line of Table 5 reports the results of the Ramsey RESET test, which indicates no evidence of omitted variables in any of the three specifications.
## 5. Discussion
Applying GRA models, this study examined the key factors associated with CSD mortality in Macao (China) from 2001 to 2020. The findings based on the GRA models indicate that the smoking rate was consistently the most important factor associated with women’s CSD mortality in Macao, with physician density ranked second. In contrast, for men’s CSD mortality in Macao, physician density was estimated as the most important factor, while the smoking rate played a secondary role.
These findings suggest that women in Macao may have obtained substantial health benefits from the smoking bans, while men obtained few. Two major reasons for this difference may be considered. First, women may have benefited directly from their own smoking cessation, encouraged and supported by public health policies, as evidenced by a significantly lower smoking rate following the smoking bans [18]. Second, after full smoking bans in the local community, women usually gain extra health benefits from reduced exposure to secondhand smoke (SHS) [16]. In particular, decreased exposure to SHS among nonsmokers may result in a decrease in myocardial infarctions [14].
This study’s estimate of the smoking ban’s health effect size is comparable to those reported for mainland China. This study estimates that, owing to the smoking bans in Macao since 2012, on average about 5 CSD deaths were avoided each year among every 100,000 women, equivalent to about $11.45\%$ of the mean annual female CSD mortality (43.66 per 100,000 people), or about 16.8 CSD deaths among the Macao female population. A study of Beijing’s 2015 tobacco control policy package estimated that the associated drop in hospital admissions for cardiovascular diseases was overall more than $10\%$ [75]. In another study, Zheng et al. estimated that hospital admissions fell by about $5.4\%$ for acute myocardial infarction (AMI) and $5.6\%$ for stroke [76]. Additionally, the increasing trend in stroke admission events was reduced by $15.3\%$ [76]. Research on Tianjin (China) found that the mortality rate from AMI decreased by $16\%$ per year, while the mortality rate of stroke among those under 35 decreased by $2\%$ annually after the implementation of smoke-free legislation [17].
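A quick check of the arithmetic behind these figures (the implied female population of roughly 336,000 is our inference from the reported numbers, not a reported statistic):

$$\frac{5}{43.66} \approx 0.1145 = 11.45\%, \qquad \frac{16.8}{5/100{,}000} \approx 336{,}000 \ \text{women}.$$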
Meanwhile, the findings of this study indicate that Macao’s smoking bans did not achieve their health goals among the male population. The full smoking ban in Macao’s casinos was expected to help many casino workers quit smoking. According to a casino employee survey in 2008 (before the implementation of the initial smoking ban in Macao), more than half of the respondents ($$n = 315$$, men = 165, $52.4\%$) reported that they would try to quit smoking if smoking were outlawed at work [77]. Nevertheless, despite the eventual implementation of the smoking ban, the men’s smoking rate in Macao did not decline significantly, and the associated mortality was not averted.
In addition, the findings of this study reveal that alcohol use may be among the leading risk factors for CSD mortality among men in Macao. This is a similar health concern among men in mainland China [78], and most Chinese people are unaware of the rigorous, evidence-based public health warning that no level of alcohol consumption is safe.
This study has several limitations. First, the analysis is largely limited by the availability of the data. An overall reduction in female mortality and smoking rates was observed, but no information is available on associated societal health inequalities, such as differences between lower and higher socioeconomic status (SES) groups. Second, GRA models with different estimation approaches may sometimes produce different analysis results, and the ranking orders predicted by GRA models lack precise numerical values, such as the number of deaths avoided, for further policy impact analysis. To address concerns about the gender gap in smoking cessation, future research should focus on identifying male-specific smoking cessation barriers in the context of complete smoking bans. Whereas internal barriers, such as stress and cravings, appear more prominent in women, external barriers, such as the widespread availability of cigarettes and the social aspects of smoking, are more prevalent in men [79].
## 6. Conclusions
Applying GRA models, this study found that the smoking rate was the most important factor associated with women’s CSD mortality in Macao between 2001 and 2020. Following the smoking bans, women’s CSD mortality showed an extra annual reduction equivalent to about $11.45\%$ of the mean annual female rate, with no comparable result for men.
The findings of this study have significant public health policy implications for Macao. Although smoking bans have been successfully implemented in the city’s casinos, Macao nevertheless needs to focus on reducing the male smoking rate and promoting a healthy lifestyle. It is recommended that mainland China and other emerging economies with high smoking rates adopt total smoking bans and strong legal enforcement. Additional steps, including health education and cessation support services, should be taken to further realize smoking rate reduction in high-risk population groups. |
# Impact of Contextual-Level Social Determinants of Health on Newer Antidiabetic Drug Adoption in Patients with Type 2 Diabetes
## Abstract
Background: We aimed to investigate the association between contextual-level social determinants of health (SDoH) and the use of novel antidiabetic drugs (ADD), including sodium-glucose cotransporter-2 inhibitors (SGLT2i) and glucagon-like peptide-1 receptor agonists (GLP1a), for patients with type 2 diabetes (T2D), and whether the association varies across racial and ethnic groups. Methods: Using electronic health records from the OneFlorida+ network, we assembled a cohort of T2D patients who initiated a second-line ADD in 2015–2020. A set of 81 contextual-level SDoH documenting the social and built environment were spatiotemporally linked to individuals based on their residential histories. We assessed the association between the contextual-level SDoH and initiation of SGLT2i/GLP1a and determined their effects across racial groups, adjusting for clinical factors. Results: Of 28,874 individuals, $61\%$ were women, and the mean age was 58 (±15) years. Two contextual-level SDoH factors identified as significantly associated with SGLT2i/GLP1a use were the neighborhood deprivation index (odds ratio [OR] 0.87, $95\%$ confidence interval [CI] 0.81–0.94) and the percent of vacant addresses in the neighborhood (OR 0.91, $95\%$ CI 0.85–0.98). Patients living in such neighborhoods were less likely to be prescribed newer ADD. There was no interaction between race-ethnicity and SDoH on the use of newer ADD. However, in the overall cohort, non-Hispanic Black individuals were less likely to use newer ADD than non-Hispanic White individuals (OR 0.82, $95\%$ CI 0.76–0.88). Conclusion: Using a data-driven approach, we identified key contextual-level SDoH factors associated with not following evidence-based treatment of T2D. Further investigations are needed to examine the mechanisms underlying these associations.
## 1. Introduction
More than 100,000 individuals die from diabetes each year in the United States (US) [1]. Of these deaths, $60\%$ are attributed to concurrent cardiovascular disease (CVD), with myocardial infarction being the most common cause [2]. Among the antidiabetic drugs (ADD) currently available on the US market, two relatively novel agents, sodium-glucose cotransporter-2 inhibitors (SGLT2i) and glucagon-like peptide-1 receptor agonists (GLP1a), are associated with significant reductions in blood glucose levels and have been found particularly effective in reducing the risk of CVD in individuals with type 2 diabetes (T2D) [3]. In addition, these novel antidiabetic agents have been shown to be associated with weight loss, a reduced risk of hypoglycemia, and cardiorenal protection, favorable benefits that are of great importance to patients with T2D [3]. The American Diabetes Association (ADA) recommends SGLT2i and GLP1a for patients with T2D who have CVD, heart failure, chronic kidney disease, or an increased risk of these conditions, regardless of their glycemic status [4,5].
However, the utilization of SGLT2i and GLP1a in real-world T2D patient populations in the US is relatively low compared to other ADD [6,7], especially among historically marginalized communities, such as racial and ethnic minority groups and individuals experiencing socioeconomic disadvantages. Data from commercial insurance and Medicare, for example, showed that Black patients were 10–$20\%$ less likely to receive newer ADD than White patients [7,8,9,10]. While such disparity can be explained overall by racial disparity as a distal cause, the proximal cause, namely the mechanisms underlying differential initiation of SGLT2i/GLP1a across racial and ethnic groups, remains largely unknown.
In the past, research and clinical approaches centered on the individual level have led to improvements in self-management outcomes and reductions in cardiovascular risk among patients with T2D [11]. More recently, researchers have acknowledged the need to consider external factors, namely the social determinants of health (SDoH), to achieve sustainable improvement in diabetes outcomes [12]. SDoH refer to the various social, economic, and environmental factors, including access to healthcare, education, employment, housing, and social support, that affect people’s health, well-being, and quality of life [13]. Contextual-level SDoH refer to the broader social and built factors within a community or region that influence health outcomes, and they are increasingly recognized as a vital source of information for developing healthcare policies designed to improve population health management and value-based care [14,15]. Previous studies have demonstrated the association of contextual-level SDoH with geographic variation in diabetes risk [16]. However, minimal data exist on the extent to which contextual-level SDoH (e.g., residential segregation, food environment, and neighborhood walkability) may impact healthcare use, including the initiation of evidence-based treatment in T2D care [8]. A Dutch study published in 2012 examined the association of regional-level age composition and socioeconomic status with spatial variation in ADD use, but without a comprehensive evaluation of multiple contextual-level SDoH [17].
Given that race and ethnicity are social constructs [18], contextual-level SDoH can play important roles in the development of racial and ethnic disparities across geographic regions [19]. Therefore, understanding how contextual-level SDoH impact the adoption of these outcome-improving therapies in millions of Americans with T2D is imperative. Accordingly, this study aimed to examine the association between patients’ contextual-level SDoH and their initiation of the newer ADD, and how such associations may vary across racial and ethnic groups. With such empirical evidence, the racial disparity in SGLT2i/GLP1a utilization can be better understood, and relevant policymaking can be better guided.
## 2.1. Data Source and Study Population
This is a retrospective cohort study using data from the OneFlorida+ network, which contains large collections of electronic health records (EHR) covering more than 19 million patients from Florida (~16.8 million), Georgia (~2.1 million), and Alabama (~9.1 thousand) [20]. We assembled a cohort of adults (i.e., aged ≥18 years) identified as having at least one inpatient or outpatient T2D diagnosis (ICD-9 codes 250.x0 or 250.x2, or ICD-10 code E11) and ≥1 ADD prescription. The algorithm used to identify T2D has been validated in OneFlorida+ with a positive predictive value (PPV) > $94\%$ [21] and is preferred over using diagnosis codes alone, which can lead to misclassification error [22]. Among the T2D cohort, we identified individuals who initiated SGLT2i or GLP1a, or another second-line ADD (i.e., dipeptidyl peptidase-4 inhibitors, sulfonylureas, thiazolidinediones, and basal insulin) in 2015–2020. The index date was the day of the first prescription of a second-line ADD, defined as no use of the drug in the three prior years. We restricted the study cohort to individuals who had ≥2 inpatient or outpatient encounters per year in OneFlorida+ in the three years prior to the index date to obtain complete information for modeling.
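The new-user definition above can be sketched in a few lines (a hypothetical pandas illustration, not the study's code; the table layout, column names, and the 3-year ≈ 1095-day washout approximation are our assumptions):

```python
import pandas as pd

# Hypothetical prescription table: one row per fill, with the patient,
# the second-line drug class, and the fill date.
rx = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "drug_class": ["SGLT2i", "SGLT2i", "SGLT2i", "SU", "SU"],
    "fill_date": pd.to_datetime(["2014-03-01", "2016-05-10", "2016-08-01",
                                 "2017-02-01", "2017-06-01"]),
})

rx = rx.sort_values(["patient_id", "drug_class", "fill_date"])
# Date of the previous fill of the same drug class for the same patient.
rx["prev_fill"] = rx.groupby(["patient_id", "drug_class"])["fill_date"].shift()

# New use: first-ever fill, or a fill preceded by >= 3 drug-free years
# (approximated here as 3 * 365 days).
washout = pd.Timedelta(days=3 * 365)
rx["new_use"] = rx["prev_fill"].isna() | (rx["fill_date"] - rx["prev_fill"] >= washout)

# Index date: the earliest qualifying new-use fill within the study window.
in_window = rx["fill_date"].between("2015-01-01", "2020-12-31")
index_dates = rx.loc[rx["new_use"] & in_window].groupby("patient_id")["fill_date"].min()
print(index_dates)  # patient 1 is excluded: the 2016 fills follow use in 2014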
## 2.2. Study Outcome and Covariates
The outcome was the initiation of a newer ADD (i.e., SGLT2i or GLP1a) versus another second-line drug. We collected baseline demographic and clinical information on or within the 3-year period prior to the index date, including age, sex, race-ethnicity (non-Hispanic White [NHW], non-Hispanic Black [NHB], Hispanic, and other), rurality (defined by linking patients’ residential Federal Information Processing System [FIPS] county codes to the rural–urban continuum codes [RUCC] of the US Department of Agriculture’s [USDA] Economic Research Service and classifying rurality into three levels: RUCC ≤ 3 as metropolitan; 3 < RUCC ≤ 7 as urban; and 7 < RUCC ≤ 9 as rural), primary payer (Medicare, Medicaid, private insurance, no insurance, and other), diabetes complications and comorbidities (such as cardiovascular disease and chronic kidney disease), co-medications (i.e., use of another ADD, antihypertensives, statins, and antidepressants), clinical presentation (most recent blood pressure and body mass index [BMI], in four categories: ≤25, 25–30, 30–100 kg/m2, or missing), and lab values (most recent hemoglobin A1c [HbA1c], in four categories: ≤7%, 7–10%, 10–21%, or missing). Clinical data were extracted from de-identified EHR records in the OneFlorida+ network.
## 2.3. Contextual-Level SDoH
We obtained data on built and social environment measures from six well-validated sources with different spatiotemporal scales, characterizing food access, walkability, vacant land, neighborhood disadvantage, social capital, and crime and safety. All measures were spatiotemporally linked to each individual, accounting for residential mobility during the study period. Area-weighted averages were first calculated within a 250 m buffer around the centroid of each 9-digit ZIP code. Time-weighted averages were then calculated, accounting for each individual’s residential history.
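The time-weighting step can be illustrated with a minimal sketch (Python/pandas; the residential spells and NDI values are hypothetical, and the area-weighting of each measure to each address is assumed to have been done already):

```python
import pandas as pd

# Hypothetical residential history: one row per address spell, with the
# SDoH measure (here NDI) already area-weighted to that address.
spells = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "start": pd.to_datetime(["2015-01-01", "2018-07-01", "2015-01-01"]),
    "end":   pd.to_datetime(["2018-06-30", "2020-12-31", "2020-12-31"]),
    "ndi":   [0.8, -0.2, 0.1],
})

# Weight each spell's measure by the days lived at that address, then
# average within patient: the time-weighted exposure over the study period.
spells["days"] = (spells["end"] - spells["start"]).dt.days + 1
spells["weighted"] = spells["ndi"] * spells["days"]
totals = spells.groupby("patient_id")[["weighted", "days"]].sum()
time_weighted_ndi = totals["weighted"] / totals["days"]
print(time_weighted_ndi)  # patient 1: a mix of two spells; patient 2: 0.1
```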
Table 1 summarizes the contextual-level data sources and the corresponding spatiotemporal scales. A total of 43 food access measures at the census tract level in 2015 and 2019 were obtained from the USDA’s Food Access Research Atlas [23]. Walkability was assessed using the National Walkability Index developed by the US Environmental Protection Agency (EPA) [24], which scores walkability from 1 to 20 for each census block group, with 1 indicating the least walkable and 20 the most walkable. Vacant land measures at the census-tract level from 2015 to 2019 were obtained from the US Department of Housing and Urban Development, aggregated with US Postal Service administrative data [25], and a total of 18 measures that were available across all years were included. The neighborhood deprivation index (NDI), a socioeconomic status measure, was obtained at the census block group level based on data from the 2015 to 2019 American Community Survey (ACS). It yields information on the income, education, employment, and housing quality of a neighborhood and allows ranking by socioeconomic disadvantage [26]. In addition, ten social capital measures were constructed using Census Business Patterns data based on the North American Industry Classification System (NAICS) codes [27] at the 5-digit ZIP code tabulation area (ZCTA5) level. Furthermore, eight county-level annual measures of crime and safety were obtained from the Uniform Crime Reporting Program from 2015 to 2019 [28]. A total of 81 SDoH measures were included in the analyses.
## 2.4. Statistical Analysis
We applied normalization transformations to all continuous contextual-level SDoH variables using the bestNormalize package in R, which implements several transformation methods, including log, square root, exponential, arcsinh, Box-Cox, and Yeo-Johnson transformations [29]. The best transformation was selected based on the Pearson P statistic. All continuous variables were also z-score standardized (mean = 0, standard deviation = 1). All contextual-level SDoH factors and covariates of interest described above had missing values for <$2\%$ of the participants. Missing data for the contextual-level SDoH factors were imputed using the chained-equations method of the MICE package in R. A variable was considered a predictor in the imputation model if its proportion of non-missing values, among counties with missing values in the variable to be imputed, was larger than $40\%$ and it was correlated (i.e., absolute correlation > 0.4) with the variable to be imputed or with the probability of that variable being missing. We imputed a single dataset, given the minimal impact of the imputation procedure due to the large sample size and the small fraction of missing data. Missing information on BMI and HbA1c was not imputed and was maintained as a separate category.
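The transformation-selection step can be sketched as follows (a Python stand-in for the R bestNormalize workflow described above; the Shapiro-Wilk statistic is used here as the normality criterion in place of the Pearson P statistic, and the skewed input series is simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)  # a skewed SDoH measure

# Try several candidate transformations and keep the one whose output
# looks most normal, mirroring the bestNormalize idea.
candidates = {
    "log": np.log(x),
    "sqrt": np.sqrt(x),
    "arcsinh": np.arcsinh(x),
    "yeo-johnson": stats.yeojohnson(x)[0],
}

# Selection criterion: Shapiro-Wilk W (higher = closer to normal).
best_name, best_x = max(candidates.items(),
                        key=lambda kv: stats.shapiro(kv[1]).statistic)

# Z-score standardization (mean 0, SD 1), as applied to all continuous SDoH.
z = (best_x - best_x.mean()) / best_x.std(ddof=1)
print(best_name, round(float(z.mean()), 3), round(float(z.std(ddof=1)), 3))
```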
We used a two-phase approach to identify the key contextual-level SDoH associated with the initiation of SGLT2i/GLP1a versus other second-line ADD [30,31]. In Phase 1, we randomly split the data into a $50\%$ discovery set and a $50\%$ replication set. We considered all 81 contextual-level SDoH for associations with newer ADD initiation, accounting for multiple comparisons. We built a multivariable logistic regression model for each contextual-level factor, adjusting for demographics, urbanicity, diabetes complications, co-medications, clinical presentation, and primary payer. To account for multiple testing, the Benjamini-Hochberg procedure was used to control the false discovery rate (FDR) at $5\%$ [32]. A variable was considered significant if it had an FDR-adjusted p-value (or q-value) < 0.05 in both the discovery and replication sets. A correlation heatmap was generated to show the pairwise Pearson correlations of the variables retained from Phase 1. Variables from highly correlated pairs (absolute correlation coefficient > 0.6) were removed to avoid collinearity [33].
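The Benjamini-Hochberg step-up rule used for the FDR control can be written out explicitly (a minimal, self-contained sketch with simulated p-values standing in for the 81 single-SDoH models):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level q.

    Returns a boolean mask of rejected hypotheses (significant variables).
    """
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    # Largest k (1-indexed) with p_(k) <= (k / m) * q; reject ranks 1..k.
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        reject[order[:k + 1]] = True
    return reject

# Toy example: 81 p-values, 10 of them genuine signals.
rng = np.random.default_rng(2)
pvals = np.concatenate([rng.uniform(0, 0.001, 10), rng.uniform(0, 1, 71)])
print(benjamini_hochberg(pvals).sum(), "variables pass FDR < 0.05")
```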
In Phase 2, we used a multivariable logistic regression model including all significant variables identified from Phase 1 as well as all the demographic and clinical information, including age, sex, primary payer, BMI, HbA1c, type of residence, cardiovascular disease, chronic kidney disease, use of insulin and non-insulin antidiabetic medications to estimate the effect sizes. Adjusted odds ratios (aOR) and $95\%$ confidence intervals (CI) were reported.
For the key contextual-level SDoH identified using the two-phase approach, we dichotomized each variable at its 80th percentile. A higher numeric value in the NDI indicates a neighborhood that is more socioeconomically disadvantaged, and a higher percent of vacant addresses indicates a neighborhood with more vacant housing. Therefore, we defined neighborhoods in the top 20th percentile of the NDI as more deprived neighborhoods, and neighborhoods in the top 20th percentile of percent of vacant addresses as neighborhoods with more vacancy. We applied multilevel logistic regression, adjusting for demographic and clinical characteristics, to determine whether the association of the key contextual-level SDoH with newer ADD initiation varied by race-ethnicity.
Analyses were performed using the R statistical software (version 3.6.1; R Development Core Team) and SAS 9.4 (Cary, North Carolina). The study was approved by the Institutional Review Board at the University of Florida (IRB202102283).
## 3.1. Descriptive Analysis
Our final analysis comprised 28,874 patients. Table 2 highlights the demographic and clinical characteristics of the study population by race and ethnicity. Overall, the mean age was 58 (±15) years, and $61\%$ were women. The majority of the patients were enrolled in public insurance programs such as Medicare ($37\%$) and Medicaid ($35\%$). Compared with NHW patients, NHB patients were younger (54.6 vs. 58.5 years, $p \leq 0.01$) and more likely to be covered by Medicaid ($41\%$ vs. $28\%$, $p \leq 0.01$), while Hispanic patients and patients of other races were older (mean age: 61 and 60 years, respectively) and more likely to be women. Of our cohort, 11,649 patients ($40\%$) initiated a newer ADD (i.e., SGLT2i or GLP1a). NHW patients and patients of other races/ethnicities were more likely to have initiated a newer ADD versus another second-line ADD than NHB and Hispanic patients (NHW and other race/ethnicity: both $44\%$; NHB: $38\%$; Hispanic: $35\%$; $p \leq 0.01$).
## 3.2. Selection of Contextual-Level SDoH
Figure 1 is a volcano plot summarizing the results of Phase 1. After accounting for multiple comparisons using the Benjamini-Hochberg procedure, a total of 20 and 11 variables were significantly associated with novel ADD use in the discovery and replication sets, respectively. Among them, ten variables from three categories were significant in both sets: the NDI, one food access measure (the percentage of residents without vehicle access living half a mile from a food supply), and eight variables documenting vacant housing in the neighborhood. All ten variables were associated with a lower likelihood of initiating newer ADD (OR < 1, Figure 1). We observed high correlations among the eight vacant land measures (all pairwise correlation coefficients > 0.6; Appendix A, Figure A1). Therefore, we kept only one variable, the percent of vacant addresses, in the Phase 2 analysis, as it is a more comprehensive measure than the others in the category.
In Phase 2 analysis, the NDI, percentage without vehicle access living a half mile from supply, and percent of vacant addresses, were simultaneously included in a multivariable logistic regression model after adjusting for baseline demographic and clinical information. Two variables—NDI and percent of vacant addresses—remained statistically significant in the multivariable model. Therefore, our two-phase approach identified two contextual-level SDoH that were significantly associated with a lower likelihood of newer ADD initiation, which are neighborhoods with a higher degree of deprivation and neighborhoods with more vacant housing (Table 3).
## 3.3. Association of Contextual-Level SDoH and New ADD Initiation across Racial and Ethnic Groups
Table 4 shows the results of the multivariable logistic regression of the binary key contextual-level SDoH variables in association with novel ADD initiation in the overall cohort and in each racial-ethnic subgroup. In the overall cohort, NHB patients were significantly less likely to use newer ADD than NHW patients (aOR 0.82, $95\%$ CI: 0.76–0.88, $p \leq 0.01$) after adjusting for all the covariates listed above. Patients living in a more deprived neighborhood had a significantly lower likelihood of initiating a newer ADD than the remaining patients (aOR 0.87, $95\%$ CI: 0.81–0.94, $p \leq 0.01$). Patients living in a neighborhood with more vacant addresses were also less likely to initiate a newer ADD (aOR 0.91, $95\%$ CI: 0.87–0.95, $p \leq 0.01$) than their counterparts. We observed similar trends in the racial and ethnic subgroups, and no significant interaction between race/ethnicity and contextual-level SDoH was detected.
## 4. Discussion
SDoH are not only experienced by individuals but also exert influence at the community level. Community-level information about the neighborhoods in which individuals live, learn, work, and play is recognized as the community’s vital signs [18], conveying contextual-level social deprivation and impacting health risks. Our study is unique in linking a set of contextual-level factors documenting social and built environments to extensive collections of EHR data via individuals’ residential histories in a cohort of real-world patients with T2D. Using a data-driven approach, we determined the key contextual-level SDoH factors associated with evidence-based treatment for T2D. After accounting for multiple testing and high correlations among the exposures, two contextual-level SDoH variables characterizing the neighborhood deprivation and vacant housing were identified as being significantly associated with individuals’ limited initiation of newer ADD known to improve cardiorenal outcomes of T2D. These results provide evidence supporting a spatially explicit data-driven approach in developing interventions to address disparities in initiation of T2D treatment.
Increasing evidence has demonstrated an association between neighborhood factors and diabetes outcomes. For example, a more disadvantaged socioeconomic status, poorer food access and built environment (e.g., walkability, recreational facilities), and less social cohesion are associated with a higher risk of T2D [34,35,36,37]. Additionally, lower neighborhood socioeconomic status has been significantly associated with worsening physical and mental health status and poor glycemic control among patients with diabetes [38,39]. However, very few studies have examined whether contextual-level SDoH may influence healthcare quality, such as the initiation of evidence-based treatment. A study using claims data found that contextual-level SDoH, such as poor food access, weak social support, and the lack of a healthy built environment, were significantly associated with non-adherence to antihypertensive medication [40]. A randomized trial that enrolled 749 Mexican-American patients at a university-affiliated clinic showed that patients living in neighborhoods with greater deprivation were much less likely to adhere to their ADD protocols than those living in neighborhoods in the next higher quartile of the deprivation index [41]. In a US-based study examining the association between neighborhood social environment factors and adherence to oral antidiabetic medications, residents of highly sociable neighborhoods were more likely to adhere to ADD regimens than their counterparts in less sociable surroundings [42].
The current study found that the NDI, an index documenting neighborhood deprivation, was significantly associated with newer ADD initiation. NDI is a composite indicator of contextual-level socioeconomic disadvantages in four areas beyond the strictly specified healthcare setting: income, housing quality, employment, and education. Previous studies have documented the association between neighborhood deprivation, attributed to income, employment, and education, and the quality of diabetes care, reporting that patients living in more deprived neighborhoods were significantly less likely to obtain high-quality diabetes care [7,8,10]. At an individual level, a lack of income and unemployment can create barriers to accessing high-quality diabetes care, while a lack of education has been linked to poor health literacy [43]. At the contextual level, the role of political context could also shape socioeconomic factors, and this interplay could result in unequal resource distribution and structural inequalities in the neighborhoods that perpetuate health disparities [44]. Therefore, individuals with a low socioeconomic profile at the contextual level may face barriers to the use of novel ADD treatment.
The consequences of vacant housing extend far beyond empty space. Vacant land is usually an indicator of population out-migration and disinvestment. In addition to an increased risk of violence and crime [45], vacant housing often leads to a reduction in business and employment, resulting in a lack of community resources and of access to essential facilities such as food, medical, and social support services [46,47], further exacerbating health disparities in these communities. This lack of resources can have far-reaching consequences for the health and well-being of residents. Previous studies have shown that empty lots are associated with higher levels of chronic stress and fewer social interactions, resulting in unfavorable health outcomes [25]. In our study, individuals living in neighborhoods with more vacant addresses had lower access to newer ADD, which could be explained by a lack of access to high-quality diabetes care. It is essential to address the issue of vacant housing and provide necessary resources and support to such disadvantaged communities. Innovative strategies, such as mobile medical clinics (MMCs), have been effective in serving medically vulnerable populations, such as the urban poor [48] and populations without stable housing [49], for whom access to fixed healthcare facilities is limited by a lack of facilities and meager financial resources. MMCs could improve access to care by overcoming geographic and social restrictions, such as those in neighborhoods with many vacant addresses that traditional, permanent healthcare facilities tend to avoid, thus addressing health inequities and mitigating social obstacles to healthcare.
Despite having a disproportionately higher risk of cardiovascular disease, patients from racial and ethnic minority groups have a lower probability of initiating guideline-based therapies that improve their outcomes, including the uptake of new ADD [9,50]. It has been suggested that these differences might be driven by disadvantages in insurance coverage and poor socioeconomic status among these racial and ethnic subgroups, and several studies have found that Medicare Advantage enrollees are less likely to initiate newer ADD than commercially insured patients [7,8,9]. However, in our study, the racial and ethnic disparities in new ADD use persisted after adjusting not only for insurance but also for the NDI, a proxy for socioeconomic status. This suggests that such disparity was not driven solely by insurance factors and socioeconomic status. However, we did not identify a significant interaction between race/ethnicity and the key contextual-level SDoH in association with initiating newer ADD. While it is possible that an interaction lies elsewhere and was not captured by the two-phase method presented in this study, our findings highlight the structural and environmental factors that drive inequities in the use of evidence-based treatment, independent of race and ethnicity.
Our study has several limitations. First, we did not exclude patients with gestational diabetes, so there is a possibility of misclassification for individuals with pregnancy and gestational diabetes not diagnosed by physicians. Second, the two-phase approach we used did not consider non-linear associations or potential interactions. Generalized additive models could be considered in future work to account for non-linear relationships between key contextual-level SDoH and the study outcome. Additionally, Bayesian kernel machine regression and Bayesian multiple index models can capture complex interrelations among contextual-level SDoH. Third, although many contextual-level SDoH were included to characterize the social and built environment, the list is not exhaustive. Continuing efforts are needed to further improve the measurement of contextual-level SDoH. Fourth, our study cohort was constructed using EHR data, and we cannot completely rule out the inclusion of prevalent users of second-line ADD. However, we extended the baseline period to three years and restricted the cohort to individuals with at least two encounters per year to capture prescription and medical information, which largely eliminated prevalent users. In addition, regarding the association between individual-level factors and newer ADD initiation, our results were consistent with prior studies using claims data [50], supporting the validity of the current study’s findings. Finally, participants were limited to those who received care at one or more sites in the OneFlorida+ Clinical Research Network. Thus, our results may not be generalizable to those who did not receive healthcare at one of these facilities.
## 5. Conclusions
In a cohort of T2D patients from a statewide EHR network, we identified two key contextual-level SDoH factors associated with limited use of new ADD: individuals living in neighborhoods with a higher deprivation index and more vacant addresses were less likely to initiate newer ADD than those living in less deprived and more fully occupied neighborhoods. Although the specific mechanisms underlying these associations require further investigation, our findings contribute to the growing body of evidence on neighborhood-level factors, their interplay with race across spatial contexts, and their influence on evidence-based healthcare. A comprehensive understanding of these complex factors is crucial for developing effective strategies to address health inequities and promote evidence-based treatment in T2D care.
# Ambient Environmental Ozone and Variation of Fractional Exhaled Nitric Oxide (FeNO) in Hairdressers and Healthcare Workers
## Abstract
Fractional exhaled nitric oxide (FeNO) is a breath-related biomarker of eosinophilic asthma. The aim of this study was to investigate FeNO variations due to environmental or occupational exposures in respiratory healthy subjects. Overall, 14 hairdressers and 15 healthcare workers in Oslo were followed for 5 workdays. We registered the levels of FeNO after commuting and arriving at the workspace and after ≥3 h of work, in addition to symptoms of cold, commuting method, and hair treatments that were performed. Both short- and intermediate-term effects after exposure were evaluated. Environmental assessment of daily average levels of air quality particulate matter 2.5 (PM2.5), particulate matter 10 (PM10), nitrogen dioxide (NO2), sulphur dioxide (SO2), and ozone (O3) indicated a covariation in ozone and FeNO in which a 35–$50\%$ decrease in ozone was followed by a near $20\%$ decrease in FeNO with a 24-h latency. Pedestrians had significantly increased FeNO readings. Symptoms of cold were associated with a significant increase in FeNO readings. We did not find any FeNO increase of statistical significance after occupational chemical exposure to hair treatments. The findings may be of clinical, environmental and occupational importance.
## 1. Introduction
Nitric oxide (NO) was first detected as an intracellular messenger in various cells, such as in platelets, the nervous system, and vasculature, where it is known for its relaxing activity related to the endothelium and vasodilation and as an effector molecule in immunological reactions [1]. It was later measured in exhaled breath as fractional exhaled NO (FeNO) and was shown to be increased in patients with asthma [2,3]. Additionally, NO has been implicated in several inflammatory diseases, obesity, diabetes, and heart disease [4]. NO is regulated by nitric oxide synthase, and three major isoforms have been identified, of which inducible nitric oxide synthase (iNOS or NOS2) is associated with immunoregulation by cytokines and other stimuli in both the innate and the adaptive immune system [5,6]. Activated macrophages and other innate immune cells generate NO as a pro-inflammatory response to various pathogens [7]. More recent research has shown that iNOS is regulated on the epigenetic level by DNA methylation after environmental and occupational exposures [8,9,10]. In asthma, iNOS has predominantly been associated with the regulation of T-cell function and differentiation [11]. High FeNO values are associated with allergic/eosinophilic inflammation, also known as Type 2 inflammation [12]. FeNO can be used as an indicator of inhaled corticosteroid response and has been present in clinical use for several years as an evaluation tool for asthma control [13]. Diurnal variations of FeNO have been recorded in the airways of healthy subjects from roughly 5 to 20 ppb (parts per billion), in controlled asthmatics from 20 to 40 ppb, and in uncontrolled asthmatics from 20 to 70 ppb [14].
Previously, FeNO was found to be significantly elevated in a cohort of welders at levels that were normally associated with Type 2 inflammation (median 43.5 ppb) [15]. Welders are exposed to respiratory irritants (particulate matter, gases and smoke). The occupational hazards in a hairdressing salon are complex and include many respiratory irritants and allergens [16]. To our knowledge, there are no previous studies investigating FeNO variations after exposure in hair salons.
Increased air levels of particulate matter 2.5 (PM2.5), particulate matter 10 (PM10), nitrogen dioxide (NO2), sulphur dioxide (SO2), and ozone (O3) exacerbate asthma in children and adults and are associated with the onset of childhood asthma [17]. Normal and increased levels of FeNO are difficult to interpret in both the diagnosis and treatment of asthma [18]. FeNO variations require further explanation with respect to their role in identifying and treating respiratory diseases and environmental and occupational exposure.
The aim of the study was to investigate short- and intermediate-term FeNO variations after environmental and occupational exposures in hairdressers and healthcare workers (HCWs).
## 2.1. Study Design
Non-smoking subjects who were aged 18 years or older and scheduled for 5 working days each week were included in this study. The exclusion criteria were active smoking and physician-diagnosed respiratory diseases. The study was set up as an observational study of hairdressers and HCWs. A total of 15 hairdressers working at six hair salons in downtown Oslo, covering an area of about one square kilometer (~0.4 square miles), were recruited via invitation. However, one hairdresser was excluded as an outlier (FeNO > 60 ppb) due to a probable respiratory disease. A total of 15 HCWs working at outpatient clinics or as technical assistants were recruited from Oslo University Hospital at two locations 3–6 km (1.9–3.7 miles) from the downtown area. The HCWs served both as an occupational control for the hairdressers and as an environmentally exposed group, as they experienced similar exposures to the hairdressers by living in or close to Oslo. The study was performed over two consecutive work weeks, due to logistics and to minimize sensor variation by using the same FeNO testing instrument sensor. A questionnaire was included in the case report form, in which daily questions were asked and noted by the investigators at all sampling times. The daily questions covered symptoms, mode of commuting and traveling time, exposure to smoke or vaping fumes, and the type and number of hair treatments performed. Most hairdressers started their workday between 09:00 and 11:00 a.m., whereas HCWs started between 08:00 and 09:00 a.m. Due to sampling logistics, a few hairdressers started their day before the first sampling; however, the samples were taken within the first 60 min of work, and the occupational exposures in these instances were regarded as small. The daily sampling of the hairdressers followed their usual work week from Monday to Saturday with one day off; only four hairdressers worked Saturday. All of the HCWs worked Monday to Friday.
A publicly accessible network of air pollution monitors in Norway provides information on per-minute, hourly average, and daily average levels of PM2.5, PM10, NO2, SO2, and ozone (www.nilu.no (accessed on 29 December 2022)).
## 2.2. FeNO Measurement and NIOX VERO© Repeatability
A portable NIOX VERO® device was used to measure FeNO in ppb, with a disposable mouthpiece and filter. A fresh pre-calibrated sensor rated for 300 measurements was used for the whole study and the repeatability tests. FeNO was measured after exhalation, followed by inhalation through the mouthpiece and a NO scrubber for a NO-free air supply, and lastly by exhalation through the mouthpiece at a flow rate of 50 mL/s (±5 mL/s) for 10 s. One unexpectedly high within-day change (~20 to 40 ppb) was re-measured with a fresh mouthpiece, which changed the reading by only 1 ppb. To test instrument repeatability, four subjects in the HCW group performed five additional consecutive FeNO measurements with fresh mouthpieces (the measurements are shown in Supplementary Materials S1). The coefficient of variation (CV) was calculated as the within-subject SD divided by the within-subject mean, as described previously [19]. The variation was estimated to be $8.5\%$ ± $3.5\%$. The manufacturer of the NIOX VERO® provides the following precision data: <3 ppb of the measured value for values <30 ppb, and <$10\%$ of the measured value for values ≥30 ppb.
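The repeatability calculation reduces to a within-subject coefficient of variation, as sketched below (Python; the readings are hypothetical placeholders for the values in Supplementary Materials S1):

```python
import numpy as np

# Five consecutive FeNO readings (ppb) for each of four subjects;
# hypothetical values illustrating the calculation.
readings = np.array([
    [14, 15, 13, 14, 15],
    [22, 20, 21, 23, 22],
    [17, 18, 18, 16, 17],
    [11, 12, 11, 10, 11],
], dtype=float)

# Within-subject CV = SD / mean per subject, summarized across subjects.
cv = readings.std(axis=1, ddof=1) / readings.mean(axis=1)
print(f"CV = {cv.mean():.1%} +/- {cv.std(ddof=1):.1%}")
```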
## 2.3. Statistical Analysis
FeNO, the primary end-point, was tested as a continuous variable, as were sampling and commuting times. Normality tests (Kolmogorov–Smirnov and Shapiro–Wilk) indicated both normality and non-normality for FeNO and other key variables. We therefore applied both parametric tests (unpaired and paired Student’s t-tests, with Levene’s test for equality of variances) and non-parametric tests (Mann–Whitney U test), and inspected histogram distributions. The statistical analyses were performed using SPSS v28 and GraphPad Prism 9.2.0.
## 2.4. Ethics
All of the participants provided informed consent, and we can confirm that all of the research was performed in accordance with relevant guidelines/regulations. The study was approved by the regional ethics committee (case no. 480861), the Hospital Data protection officer (case no. 22–16786), and registered at www.clinicaltrials.gov (accessed on 29 December 2022) (Identifier: NCT05507944).
## 3.1. Demographics
The HCWs were significantly older than the hairdressers (45.9 vs. 33.4 years, $$p \leq 0.001$$) (Table 1). Otherwise, there were no significant demographic differences between the two groups.
## 3.2. Diurnal Variation of FeNO
Figure 1a,b shows the diurnal variations in FeNO during week 38 (19–24 September 2022) and week 39 (26–30 September 2022) among hairdressers and HCWs. No short- or intermediate-term increase in FeNO was detected during the weeks.
## 3.3. Air Quality Levels
Figure 2a shows the corresponding average daily air quality levels. The data values are available in Supplementary Materials S2. A decrease in the ozone levels during both weeks corresponds with a decrease in FeNO S (measured after commuting and arriving at the workspace) but not in FeNO E (measured after ≥3 h at work), with a 24-h latency. In Figure 2b, the decrease in ozone and FeNO S in ppb is emphasized. When ozone decreased by $48\%$ or 9.38 ppb (19.73 to 10.35 ppb) in week 38 and $34\%$ or 9.33 ppb (27.75 to 18.42 ppb) in week 39, FeNO S decreased by $19\%$ or 3.25 ppb (17.25 to 14.00 ppb) and $20\%$ or 3.75 ppb (18.46 to 14.71 ppb), respectively. The FeNO values are available in Supplementary Materials S3. When correcting for symptoms of cold, as shown in Supplementary Materials S3, FeNO S decreased by $26\%$ or 4.89 ppb (19.09 to 14.20 ppb) and $12\%$ or 1.98 ppb (16.25 to 14.27 ppb), respectively.
## 3.4. FeNO Measurements, Sampling and Commuting Time
Among hairdressers and HCWs, there was no significant daily increase in FeNO (Table 2). The distribution of the data and normality tests are available in Supplementary Materials S4 and S5.
## 3.5. Symptoms of Respiratory Infections
There were significantly increased FeNO S and FeNO E measurements among the participants who reported cold symptoms ($p \leq 0.001$) (Figure 3). No participants reported fever or shortness of breath.
## 3.6. FeNO, Commuting and Hair Treatments
Figure 4a shows the FeNO measurements related to commuting by car, public transport, bicycle, and on foot; Figure 4b shows the FeNO measurements in relation to the hair treatments performed by the hairdressers, including bleaching, dyeing, the use of hair spray, and other treatments (mostly nails). The data values for both figures are available in Supplementary Materials S6. There was a significant increase in FeNO S and FeNO E among those who reported commuting as pedestrians ($$p \leq 0.026$$ and $$p \leq 0.040$$). Among those who reported performing bleaching and dyeing, there was a significant decrease in FeNO S ($$p \leq 0.007$$ and $$p \leq 0.009$$) and FeNO E ($$p \leq 0.014$$ and $$p \leq 0.007$$).
## 4. Discussion
We did not detect any short- or intermediate-term increases in FeNO corresponding to occupational exposures among the hairdressers. All of the hair salons had good ventilation systems. Hairdressers performing bleaching and dyeing had the lowest FeNO levels; the cause is unclear, although the non-exposed groups had higher-than-normal levels of FeNO. As expected, symptoms of cold significantly increased FeNO. FeNO was slightly increased among those who commuted as pedestrians for 5 min or more, who may also have been more exposed to air pollution than other commuters. However, FeNO did not increase among those who bicycled, although this was a small group. Interestingly, there were large decreases in FeNO after commuting on different weekdays for both hairdressers (week 38, Wednesday to Thursday) and HCWs (week 39, Thursday to Friday). Compared with the air pollution levels, large decreases were also present for ozone during both weeks, although one day before the decrease in FeNO. The decrease in ozone was 35–$50\%$, whereas the decrease in FeNO was close to $20\%$, which is above our repeatability estimate and the NIOX VERO® manufacturer’s precision data.
Several studies have investigated FeNO and occupational exposures over the last 20 years with varying results, showing elevated values in studies of spray painters, underground tunnel workers, and welders, among others [15,20,21]. One study found daily increases in FeNO of up to $40\%$ in shoe and leather makers [22]. Our findings may explain unexpected variations in FeNO measurements: welding, spray-painting, and shoemaking produce ozone or volatile organic compounds (VOCs), an ozone precursor, which may increase FeNO levels among exposed workers. These studies show large differences in baseline FeNO values, ranging from 6 ppb to 25 ppb. A recent systematic review of occupational asthma and FeNO noted that differing FeNO threshold levels made drawing conclusions difficult [23]. Occupational studies have focused on particulate matter and chemical exposures without accounting for environmental effects. In occupational medicine and hygiene, workplace exposure is commonly expected to be several times, if not orders of magnitude, higher than ambient environmental levels. For example, in the study of welders [15], the median PM2.5 was 604 µg/m3, whereas the highest PM2.5 level in Oslo during the two weeks of our study was 10 µg/m3. The other air pollution markers during our study were relatively low and did not correlate with the FeNO measurements, although NO2 showed some inverse correlation with ozone.
Ozone is different in this regard, as ambient environmental and occupational levels can be in the same range. Ground-level ozone is produced through chemical reactions between solar radiation, nitrogen oxide pollution (NOx), and VOCs [24]. Air pollution levels of ozone can exceed 100 ppb in polluted areas, whereas welding in occupational settings can produce levels close to 200 ppb [25]. The Occupational Safety and Health Administration (OSHA) standard for ozone is 100 ppb averaged over eight hours, whereas the World Health Organization (WHO) sets an eight-hour environmental limit of 50 ppb [26,27]. Indoor ozone levels in offices are about $10\%$ of outdoor levels and may increase to 30–$40\%$ of outdoor levels through emissions from printers and photocopiers [28].
The covariation of ozone and FeNO was present only in the FeNO samples taken after commuting, not in those taken after ≥3 h of work, which may hypothetically indicate that other forms of exposure (chemical, biological, or physical) exert short-term effects depending on the agent and dose.
A study on twins concluded that environmental contributions accounted for $40\%$ of FeNO variation, with the remainder related to genetics [29]. A community-based population study comparing FeNO measurements and air pollution exposure showed a positive association between FeNO in non-asthmatics and the five-day average ozone level [30]. In a longitudinal study of an elderly population, 12 weeks of weekly FeNO measurements showed a positive correlation between FeNO and the five-day average ozone level [31]. However, exposing healthy volunteers to 300 ppb ozone for 75 min did not increase FeNO at 6 or 24 h post-exposure [32]. Studies on daily variations of FeNO in healthy subjects have found small within-day and between-day changes [19,33]. We have not found any environmental or occupational studies that performed diurnal FeNO measurements to investigate such variations.
Epidemiological studies of airway disease and exposure to ozone have not been consistent, possibly because ozone is a secondary air pollutant that is confounded by NOx. However, there is evidence that supports the correlation between exposure to ozone and childhood-onset asthma [34]. A recent, large case–control study showed that exposure to ozone was the only air pollution that was associated with asthma exacerbation requiring hospitalization [35].
A biological mechanism for the pathological effects of ozone has been suggested in a mouse model of rhinitis [36]. Lymphoid-cell-sufficient mice exposed to 500 ppb ozone for 4 h daily for up to 9 days developed nasal Type 2 immunity and eosinophilic rhinitis with mucous cell metaplasia, whereas lymphoid-cell-deficient mice did not. A marked influx of neutrophils was detected 2 h post-exposure but less so after 24 h, and eosinophils dominated after 4 and 9 days. Several animal and human studies support an airway remodeling effect of long-term ambient ozone exposure [37,38].
The strengths of our study are its diurnal FeNO measurements in workers exposed to airway irritants over a whole work week, both before and after occupational exposure, and its consideration of environmental exposures during the two work weeks. The limitations are the relatively low number of workers in each occupational group and the relatively short duration of the longitudinal design with respect to more subtle environmental effects. In addition, except for hair treatments, commuting method, and commuting time, we obtained no data on indoor and outdoor exposures during leisure time.
## 5. Conclusions
We did not find any short- or intermediate-term increases in FeNO related to work exposure in hairdressers. There was a decrease among workers exposed to bleach and dye, although the significance of this is unclear. FeNO varied among both hairdressers and HCWs, which may be attributable to ambient environmental ozone levels. These are intriguing findings of clinical, environmental, and occupational importance, which should be followed up by larger diurnal cohort studies in both respiratory-healthy and non-healthy individuals.
# Mental Health and the COVID-19 Pandemic: Observational Evidence from Malaysia
## Abstract
The interplay of physical, social, and economic factors during the pandemic adversely affected the mental health of healthy people and exacerbated pre-existing mental disorders. This study aimed to determine the impact of the COVID-19 pandemic on the mental health of the general population in Malaysia. A cross-sectional study involving 1246 participants was conducted. A validated questionnaire consisting of the level of knowledge and practice of precautionary behaviors, the Depression, Anxiety, and Stress Scales (DASS), and the World Health Organization Quality of Life—Brief Version (WHOQOL-BREF) was used as an instrument to assess the impacts of the COVID-19 pandemic. Results revealed that most participants possessed a high level of knowledge about COVID-19 and practiced wearing face masks daily as a precautionary measure. The average DASS scores were beyond the mild to moderate cut-off points for all three domains. The present study found that prolonged lockdowns had significantly impacted ($p \leq 0.05$) the mental health of the general population in Malaysia, reducing quality of life during the pandemic. Employment status, financial instability, and low annual incomes appeared to be risk factors ($p \leq 0.05$) contributing to mental distress, while older age played a protective role ($p \leq 0.05$). This is the first large-scale study in Malaysia to assess the impacts of the COVID-19 pandemic on the general population.
## 1. Introduction
Global health is threatened by the ongoing outbreak of the respiratory disease named Coronavirus Disease 2019 (COVID-19) [1]. The disease is caused by a single, positive-strand RNA virus known as SARS-CoV-2, which was initially reported in Wuhan, Hubei Province, China [2]. Transmission of COVID-19 occurs mainly through respiratory droplets, and its estimated basic reproduction number (R0) ranges from 1.5 to 3.5 [3]. Its relatively high infectivity, long incubation period, long viral shedding period, and steady spread to almost all continents led the World Health Organization to declare a pandemic on 12 March 2020 [2]. As of 8 July 2022, WHO reported more than 550 million confirmed COVID-19 cases, including more than 6 million deaths [4].
Malaysia is the third-highest country for the number of COVID-19 cases and the fourth-highest country for the number of COVID-19 deaths within the Southeast Asian region [5,6]. The Malaysian government implemented a series of quarantine policies to halt the transmission of COVID-19. In the year 2020, there were four phases of Movement Control Order (MCO) from 18 March to 12 May 2020, two phases of Conditional Movement Control Order (CMCO) from 13 May 2020 to 9 June 2020, and three phases of Recovery Movement Control Order from 10 June 2020 to 31 March 2021 [7,8,9]. In mid-2021, Malaysia declared yet another nationwide Full Movement Control Order (FMCO) from 1 June to 28 June amid a surge of daily COVID-19 cases to 8000 [10].
Pandemics are associated with various psychosocial stressors involving oneself and loved ones. People experienced significant disruptions to their daily routines, including financial income [11], outdoor activities [12], sleep cycles [13], dietary patterns [14], and health behaviors [15]. Population anxiety was heightened by the uncertain prognosis of COVID-19, the imposition of unfamiliar public health measures [16], severe shortages of medicine and food [17], financial losses [18], and conflicting messages from authorities [19]. Those undergoing quarantine experienced stress, irritability, panic, depression, insomnia, fear, confusion, anger, frustration, boredom, and stigmatization [20,21,22,23,24]. Inadvertently, health systems prioritized screening and the control of disease transmission ahead of managing the mental health and well-being of the population [25,26,27].
The interplay of physical, social, and economic factors during the pandemic adversely affected the mental health of previously healthy people and exacerbated mental conditions for those with pre-existing disorders [28,29]. Phobic anxiety, panic buying, doom scrolling, travelling against movement restriction orders, absconding from treatment facilities, and binge-watching were associated with impairment of self-control, mental exhaustion, sleep, and mood disturbances [30,31,32,33]. Recent studies reported increased addictive disorders during the COVID-19 quarantine, such as internet addiction, online gambling, pornography, alcoholism, or drug misuse among the general population [34,35,36]. Home isolation restricted family members to their residences, aggravated household conflicts, and increased domestic violence and child maltreatment [37,38,39,40]. Meanwhile, survivors of COVID-19 experienced post-traumatic stress disorder (PTSD) with disproportionately elevated symptoms among those requiring inpatient admission, ventilation support, and treatment for pre-existing mental disorders [41,42].
Although movement control orders were necessary to curb the transmission of COVID-19, their prolonged and repetitive impositions were detrimental. These hostile experiences caused the country to endure financial stress [43,44], social disorders [45], and emotional disorders [46], which inevitably spiked cases of suicide attempts and depression [47]. Notwithstanding the severe mental impacts on Malaysians, studies have remained limited to healthcare professionals and university students, thus neglecting the true implications of COVID-19 for the entire population [48,49,50]. Accordingly, this study aims to determine the interplay of associations between COVID-19 knowledge, precautionary measures, mental health, and quality of life among Malaysians. It is hypothesized that the COVID-19 pandemic has negatively impacted both mental health and quality of life among Malaysians. These findings are pertinent for timely intervention in dysfunctional processes and maladaptive lifestyles that may result in the onset of psychiatric conditions [51].
## 2.1. Study Design
This cross-sectional study was conducted from 1 January 2021 to 31 December 2021. The study was conducted in full compliance with the principles outlined in the Declaration of Helsinki and Malaysia’s Good Clinical Practice [52]. Participants were recruited via convenience sampling, and the survey was conducted online using Google Forms. The inclusion criteria were: [1] being aged 18 and above; [2] residing in Malaysia for more than 12 months; and [3] being willing to give informed consent. The exclusion criteria were: [1] underlying mental illness; [2] active infection with COVID-19; and [3] being a healthcare worker. The eligibility of each participant was confirmed according to the protocol checklist, and their written informed consent was obtained. The study was approved by the principal investigator’s institutional ethics committee (UCSI University, Malaysia, approval code IEC-2020-FMHS-046).
## 2.2. Knowledge about COVID-19
A validated questionnaire developed by Zhong and colleagues was modified slightly for use in assessing participants’ understanding of COVID-19 [53]. The questionnaire consisted of twelve questions: four on clinical presentations, three on transmission routes, and five on prevention and control. Each question was provided with three options, namely “Yes”, “No”, and “I don’t know”. A correct answer was given 1 point, and an incorrect or “I don’t know” answer was given 0 points. Total knowledge scores ranged from 0 to 12, with 0 to 4 points denoting a low level of knowledge, 5 to 8 points a moderate level, and 9 to 12 points a high level. The questionnaire was validated by the National Health Commission of the People’s Republic of China, indicating acceptable reliability with a Cronbach’s alpha coefficient of 0.71 [53].
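For illustration, the minimal sketch below shows how this scoring and banding could be computed; the answer key shown is a hypothetical placeholder, not the actual questionnaire key.

```python
# Minimal sketch of the knowledge scoring described above (illustrative only).
# `answers` holds one respondent's responses; `key` is a hypothetical answer key.

def knowledge_score(answers, key):
    """Score 1 point per correct answer; wrong or "I don't know" answers score 0."""
    return sum(1 for a, k in zip(answers, key) if a == k)

def knowledge_level(score):
    """Map a 0-12 total score to the three bands used in the study."""
    if score <= 4:
        return "low"
    elif score <= 8:
        return "moderate"
    return "high"

answers = ["Yes", "No", "Yes", "I don't know", "Yes", "Yes",
           "No", "Yes", "Yes", "No", "Yes", "Yes"]
key     = ["Yes", "No", "Yes", "Yes", "Yes", "Yes",
           "No", "Yes", "Yes", "No", "Yes", "Yes"]

score = knowledge_score(answers, key)
print(score, knowledge_level(score))  # 11 high
```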
## 2.3. Precautionary Behaviors
A modified version of the validated questionnaire developed by Leung and his colleagues assessed participants’ precautionary behaviors [54]. The original questionnaire was designed to assess the overall well-being and practices during SARS outbreaks in Hong Kong. In this study, only the precautionary measures section, which consists of seven questions, was used.
## 2.4. Depression, Anxiety and Stress Scales (DASS)
The validated Depression, Anxiety, and Stress Scales (DASS) were used to assess self-perceived mental distress [55]. DASS-21 is a self-report questionnaire that contains twenty-one questions, seven per subscale of depression, anxiety, and stress. Participants rated each question on a scale of 0 (did not apply to me at all) to 3 (applied to me very much or most of the time). Sum scores were computed by summing up scores within the same subscale and multiplying them by a factor of 2. The cut-off scores for the depression, anxiety, and stress subscales were 21, 15, and 26, respectively; thus, scores above these denoted high severity of mental distress [56]. DASS was previously validated for Malaysians with a Cronbach’s alpha coefficient of at least 0.74, indicating acceptable reliability [57].
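For illustration, a minimal sketch of this scoring scheme follows. The item-to-subscale mapping shown is the standard published DASS-21 layout and is assumed here rather than taken from the study instrument.

```python
# Minimal sketch of DASS-21 scoring as described above (illustrative only).
# Item numbers follow the standard DASS-21 layout; check them against the
# questionnaire actually administered before reuse.

DASS21_ITEMS = {
    "depression": [3, 5, 10, 13, 16, 17, 21],
    "anxiety":    [2, 4, 7, 9, 15, 19, 20],
    "stress":     [1, 6, 8, 11, 12, 14, 18],
}
CUTOFFS = {"depression": 21, "anxiety": 15, "stress": 26}  # thresholds used in the study

def dass_scores(responses):
    """responses: dict mapping item number (1-21) to a 0-3 rating.
    Returns subscale sums multiplied by 2, per the scoring rule above."""
    return {scale: 2 * sum(responses[i] for i in items)
            for scale, items in DASS21_ITEMS.items()}

responses = {i: 1 for i in range(1, 22)}       # a flat example respondent
scores = dass_scores(responses)                 # {'depression': 14, ...}
flags = {s: scores[s] > CUTOFFS[s] for s in scores}  # high-severity indicator
print(scores, flags)
```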
## 2.5. World Health Organization Quality of Life—Brief Version (WHOQOL-BREF)
The validated World Health Organization Quality of Life—Brief Version (WHOQOL-BREF) was adopted to assess the quality of life amid the COVID-19 pandemic [58]. A WHOQOL-BREF assessment is a short-form questionnaire that determines the meaning of different aspects of life to the participants and their satisfaction with their experiences concerning those aspects of life. It is a self-perceived questionnaire consisting of four domains, namely physical health (seven items), psychological status (six items), social relationships (three items), and environmental conditions (eight items). Participants were asked to rate all the items on a Likert scale of 1 to 5 (1 = very poor, 2 = poor, 3 = neither poor nor good, 4 = good, and 5 = very good; 1 = very dissatisfied, 2 = dissatisfied, 3 = neither satisfied nor dissatisfied, 4 = satisfied, and 5 = very satisfied; 1 = not at all, 2 = a little, 3 = a moderate amount, 4 = very much, and 5 = an extreme amount; 1 = not at all, 2 = a little, 3 = moderately, 4 = mostly, and 5 = completely; 1 = not at all, 2 = a little, 3 = a moderate amount, 4 = very much, and 5 = extremely; or 1 = never, 2 = seldom, 3 = quite often, 4 = very often, and 5 = always). Items with negative scoring were reversed when summing up the total domain score. After that, it was converted to a transformed score within the range of 4 to 20. Domain scores were scaled positively, with a higher score denoting better QoL. WHOQOL-BREF was previously validated for Malaysians with a Cronbach’s alpha coefficient of 0.88, indicating good reliability [59].
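For illustration, the sketch below computes transformed domain scores under the standard WHOQOL-BREF layout; the domain item numbers and the reverse-scored items (Q3, Q4, and Q26 in the standard instrument) are assumptions here, not details reported by the study.

```python
# Minimal sketch of WHOQOL-BREF domain scoring as described above
# (illustrative only; item numbers assume the standard instrument layout).

REVERSED = {3, 4, 26}           # negatively worded items: reverse as 6 - rating
DOMAINS = {                      # item numbers per domain (standard layout)
    "physical":      [3, 4, 10, 15, 16, 17, 18],
    "psychological": [5, 6, 7, 11, 19, 26],
    "social":        [20, 21, 22],
    "environmental": [8, 9, 12, 13, 14, 23, 24, 25],
}

def domain_scores(responses):
    """responses: dict of item number -> 1-5 Likert rating.
    Returns transformed 4-20 domain scores (item mean times 4)."""
    out = {}
    for domain, items in DOMAINS.items():
        ratings = [6 - responses[i] if i in REVERSED else responses[i]
                   for i in items]
        out[domain] = 4 * sum(ratings) / len(ratings)
    return out

responses = {i: 4 for i in range(1, 27)}   # an example respondent answering "4" throughout
print(domain_scores(responses))            # reverse items pull physical below 16
```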
## 2.6. Statistical Analysis
Categorical data were expressed as frequencies and percentages, while continuous data were presented as the mean ± SD for normally distributed data or the median (interquartile range) for non-normally distributed data. Where appropriate, associations were analyzed using an independent samples t-test, one-way analysis of variance (ANOVA), or a Chi-square test. Correlation analyses (Pearson’s) were performed to assess the predicted relationships between demographic, DASS, and WHOQOL-BREF outcome measures. Pearson coefficients range from +1 to −1, with +1 representing a perfect positive correlation, −1 a perfect negative correlation, and 0 no relationship. Results were considered significant if $p \leq 0.05.$ Statistical analysis was performed using SPSS 26.0 (IBM Corp., New York, NY, USA) for macOS.
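For illustration, the correlation step can be reproduced with standard tooling; the sketch below uses pandas and scipy on a toy data frame whose column names are hypothetical stand-ins for the study variables.

```python
# Minimal sketch of the Pearson correlation analysis described above
# (illustrative only; `df` is a toy stand-in for the survey data).

import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "age":        [25, 34, 47, 52, 61, 29, 44, 38],
    "depression": [20, 16, 12, 10,  8, 18, 11, 14],
    "physical":   [12, 13, 14, 15, 16, 12, 15, 13],
})

r, p = stats.pearsonr(df["age"], df["depression"])
print(f"age vs depression: r = {r:.3f}, p = {p:.3f}")  # a negative r, as in Table 4
```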
## 3.1. Characteristics of Participants
Of the 1246 participants who enrolled in this study, the largest group ($$n = 506$$, $40.6\%$) were aged 30 or below at the time of study entry. Female participants ($$n = 675$$, $54.2\%$) slightly outnumbered their male counterparts. The highest educational levels were at the pre-university and graduate levels, with $32.7\%$ and $35.0\%$, respectively. Annual incomes followed a normal distribution, with the majority ($$n = 629$$, $50.5\%$) earning USD 10,000 to USD 20,000 per year. Financial struggles were similar between groups. Most participants ($$n = 350$$, $28.1\%$) did not have any dependents living with them, followed by those with two dependents ($$n = 279$$, $22.4\%$), and those with three, one, and more than three dependents at $18.9\%$, $16.3\%$, and $14.3\%$, respectively. Meanwhile, some participants ($$n = 150$$, $12.0\%$) suffered from chronic diseases. A history of being COVID-19 positive or a close contact was similar between groups. The factors analyzed were normally distributed, with no significant difference between categorical variables except employment status and chronic diseases (Table 1).
## 3.2. Level of Knowledge, Precautionary Behavior, Depression, Anxiety, and Stress Scales (DASS), and Quality of Life (WHOQOL-BREF) of Participants
Most participants ($$n = 1097$$, $88.0\%$) showed a high level of knowledge about infectious diseases, and none had a low level of knowledge. Precautionary measures were similar for nearly all assessed behaviors, except for face mask-wearing, which was practiced by $81.5\%$ of participants. The means (SD) of depression, anxiety, and stress were 13.7 (8.9), 13.0 (8.6), and 14.6 (8.5), respectively. With regard to severity, $69.7\%$ had depressive symptomatology ($13.1\%$ severe and $7.9\%$ extremely severe), $72.6\%$ had anxiety symptoms ($11.5\%$ severe and $24.3\%$ extremely severe), and $42.6\%$ had stress symptoms ($11.4\%$ severe and $1.4\%$ extremely severe). Meanwhile, the means (SD) of physical health and psychological status were 13.0 (2.6) and 12.9 (2.6), respectively, and those of social relationships and environmental conditions were 13.5 (3.2) and 13.4 (2.4), respectively (Table 2).
## 3.3. Analysis of Association
Age, educational level, employment status, and annual income were found to be significantly ($p \leq 0.05$) associated with all DASS symptoms and QOL domains, with higher impacts on the 31-to-40 and 41-to-50 age groups (similarly high), those with secondary education, part-time workers, and the annual income group of less than USD 10,000. Gender was significantly ($p \leq 0.05$) associated with depression, anxiety, social relationships, and environmental conditions, mainly affecting male participants. Financial struggle was significantly ($p \leq 0.05$) associated with anxiety and all QOL domains. Having one dependent was also significantly ($p \leq 0.05$) associated with all DASS symptoms and QOL domains, except for the environmental condition domain. A history of chronic diseases was significantly ($p \leq 0.05$) associated with depression, anxiety, and social relationships. In contrast, a history of being COVID-19 positive or a close contact was significantly ($p \leq 0.05$) associated with anxiety and stress. In addition, results indicated that participants with a moderate level of knowledge were significantly ($p \leq 0.05$) more impacted in terms of stress, physical health, and environmental conditions (Table 3).
## 3.4. Correlation of Coefficients
Table 4 shows the Pearson correlation coefficient matrix of the observed variables. Age was inversely correlated with knowledge (r = −0.070, $p \leq 0.05$), depression (r = −0.116, $p \leq 0.001$), anxiety (r = −0.083, $p \leq 0.01$), and stress (r = −0.081, $p \leq 0.01$), and directly correlated with physical health ($r = 0.102$, $p \leq 0.001$), psychological status ($r = 0.089$, $p \leq 0.01$), social relationships ($r = 0.068$, $p \leq 0.01$), and environmental conditions ($r = 0.063$, $p \leq 0.05$). The level of knowledge was found to significantly correlate ($p \leq 0.05$) with anxiety ($r = 0.064$) directly and environmental conditions (r = −0.073) inversely.
Depression showed a strong direct correlation with anxiety ($r = 0.756$, $p \leq 0.001$) and stress ($r = 0.748$, $p \leq 0.001$), and a moderate inverse correlation with physical health (r = −0.505, $p \leq 0.001$), psychological status (r = −0.493, $p \leq 0.001$), social relationships (r = −0.431, $p \leq 0.001$), and environmental conditions (r = −0.419, $p \leq 0.001$). Anxiety showed a strong direct correlation with stress ($r = 0.740$, $p \leq 0.001$) and a moderate inverse correlation with physical health (r = −0.471, $p \leq 0.001$), psychological status (r = −0.438, $p \leq 0.001$), social relationships (r = −0.405, $p \leq 0.001$), and environmental conditions (r = −0.459, $p \leq 0.001$). Stress showed a moderate inverse correlation with physical health (r = −0.476, $p \leq 0.001$), psychological status (r = −0.475, $p \leq 0.001$), social relationships (r = −0.409, $p \leq 0.001$), and environmental conditions (r = −0.438, $p \leq 0.001$).
Meanwhile, the physical health domain of WHOQOL-BREF showed a moderate direct correlation with psychological status ($r = 0.568$, $p \leq 0.001$), social relationships ($r = 0.409$, $p \leq 0.001$), and environmental conditions ($r = 0.557$, $p \leq 0.001$). Psychological status showed a moderate direct correlation with social relationships ($r = 0.403$, $p \leq 0.001$) and environmental conditions ($r = 0.524$, $p \leq 0.001$). Lastly, social relationships were moderately and directly correlated with environmental conditions ($r = 0.489$, $p \leq 0.001$).
## 4. Discussion
The widespread morbidity and mortality associated with the COVID-19 pandemic have profoundly affected every individual’s life since the declaration of the novel coronavirus disease 2019 as an international public health emergency in January 2020 [60]. In order to limit the spread of COVID-19 and curb the drastic increase in mortality, the World Health Organization [4] recommended the implementation of Public Health and Social Measures (PHSM), such as imposing lockdowns at the country, state, or district level on a global scale [61]. Pursuant to that, Malaysia declared two total nationwide lockdowns during the long pandemic [62]. Prolonged lockdowns have caused inevitable changes to the usual activities, livelihoods, and routines of people, eventually leading to deteriorated mental health and increased self-harm or suicidal behavior [63]. Recent studies have pointed out that self-isolation, quarantine, spatial distancing, misleading social media content, and social and economic discord are major contributing factors to anxiety, stress, helplessness, loneliness, and depression [64,65]. Quality of life was simultaneously impacted in the general population and in post-COVID-19 patients [66,67]. On the Impact of Event Scale-Revised (IES-R) and the Depression, Anxiety, and Stress Scale (DASS-21), Malaysia ranked fourth for pandemic impact among seven middle-income countries in Asia [68].
The knowledge level of participants about COVID-19 was assessed using a questionnaire developed during the first outbreak in Wuhan, China [53]. The questionnaire was adopted in this study with further grouping into low, moderate, and high levels of knowledge. Results revealed that most participants ($$n = 1097$$, $88.0\%$) possessed a high level of knowledge after approximately two years of battling COVID-19. This finding is supported by a recent study that highlighted the directly proportional relationship between time of media exposure and perceived knowledge among the general public [69]. Prolonged lockdown periods in Malaysia have led to high dependency on various online sources for updated information about the pandemic [70]. Notwithstanding the high level of COVID-19 knowledge among our participants, only half practiced precautionary measures such as covering their mouth when coughing or sneezing, using serving utensils, practicing good hygiene, or social distancing. These lax precautionary measures could be attributed, first, to the central government’s lack of firm, persistent, and consistent enforcement. Although social distancing was strongly imposed at the beginning of the pandemic, it lacked endurance and was promptly eased following the decline in COVID-19 positive cases, reduced occupancy in intensive care units (ICU), and a decreased R0 value. Eventually, the public lost sight of the need for social distancing and preventive measures. Similar observations were reported in India following its first wave of COVID-19 cases [71]. Second, high mask-wearing compliance could reduce adherence to social distancing, as indicated by our results. This observation can be attributed to a mechanism termed risk compensation behavior, in which individuals embrace higher risk when their safety is presumed [72].
Our results indicated that the participants’ average scores for depression, anxiety, and stress were 13.7, 13.0, and 14.6, respectively; these values were higher than the data reported in the most recent study [68]. The sudden rise in DASS scores is most likely due to the prolonged lockdown implemented in 2021. Quarantine and isolation of extended lengths were deemed highly effective countermeasures against the transmission of COVID-19, but they inevitably impacted individuals’ mental health, especially their emotional well-being [73]. Growing evidence supports the negative impacts of quarantine in causing psychological distress in the form of anxiety, depression, worry, anger, confusion, and post-traumatic stress symptoms [47,73,74]. Apart from the long lockdown period, our data illustrated potential contributions of age, gender, educational level, employment status, annual income, number of dependents, medical background of chronic illnesses, and history of being COVID-19 positive or in close contact. This is consistent with previous findings reported in Asian countries [68,75,76,77]. Although individuals of older age (60+ years) were thought to have a greater risk of contracting and dying from COVID-19, a study has shown that this group possessed better emotional well-being, which acted as a buffer against the negative psychological impacts of COVID-19 [78]. In contrast, individuals younger than 50 were reported to have a more evident association with adverse mental health. This suggests that stress arising from financial insecurity is an essential risk factor for psychological morbidity, especially for working adults between 31 and 50 years old, as observed in our study [79,80]. The faltering economy and reduction in business activities during the pandemic had a detrimental effect on workers with low incomes and unstable employment statuses [75]. A recent model suggested that unemployment caused by the pandemic could result in an additional 9570 suicides per year worldwide [81].
Quality of life is defined as an individual’s perception of their life status in the context of the culture and value system in which they live and in relation to their goals, expectations, standards, and concerns [82]. WHOQOL was employed as a multidimensional tool to assess QoL in different aspects of life and has been validated as a useful assessment tool even in different cultural populations [83]. The average scores were 13.0, 12.9, 13.5, and 13.4 for the physical, psychological, social, and environmental domains, respectively. Although this scale has no cut-off score, our reported values were generally lower than those of previous studies focusing on specific groups (students, healthcare workers) or a specific timeframe (the first lockdown) at the beginning of the pandemic in Malaysia [84,85,86]. The predictors of QoL were age, educational level, employment status, annual income, and financial struggles for all four domains. Meanwhile, gender predicted the social and environmental domains, and chronic disease the social domain. As with mental health, older age appeared to be a protective factor, even though the elderly were classified as a high-risk population during the pandemic. This could be attributed to their financial stability [87], optimism, or reduced fear of death [88]. Our findings are in line with previous studies reporting that older people exhibit similar or even better well-being than before the pandemic [87,89,90,91,92]. As highlighted in an earlier study, older people may have better psychological strengths acquired from life-challenging experiences, equipping them with skills to deal with adversity [93]. Apart from age and financial stability, chronic diseases were also reported to be a significant variable in determining QoL [94]. Some studies have shown that QoL is lower among patients with specific chronic non-communicable diseases (NCDs) such as diabetes, hypertension, and cardiovascular disease [95,96]. Due to the fear of COVID-19 infection, populations with chronic diseases often refrain from social interactions, thus lowering their QoL in the social domain [97].
Correlation analysis revealed that age correlated negatively with knowledge, depression, anxiety, and stress, and positively with all four domains of WHOQOL. This is consistent with our speculation that information about COVID-19 was mainly acquired through social media. Older people were particularly hesitant to utilize digital services due to reluctance to learn new technologies [98]. The digital competency gap between younger and older adults is considerable, especially in developing countries [99]. Nonetheless, minimizing the use of social media in acquiring COVID-19 information is beneficial for reducing symptoms of depression and anxiety [100,101]. The unverified and contradictory information on social media often caused more confusion than it consolidated a consistent effort against the pandemic [101]. This may also explain the negative correlation between age and mental distress. The better QoL in the older population could potentially be attributed to their greater tolerance of COVID-19 risk, better sleep quality, higher optimism, and better relaxation during the pandemic [102]. Depression, anxiety, and stress showed moderate negative correlations with all four domains of WHOQOL in this study. These findings concur with previous studies reporting mental distress as a useful predictor of QoL outcomes during the pandemic [103,104,105,106]. One study highlighted that anxiety could be useful for encouraging the practice of precautionary measures, but it may disrupt daily work and family life if improperly managed.
Although the pandemic is ending, the previous frequent and prolonged lockdowns have caused inevitable changes for everyone. This study indicated that prolonged lockdowns profoundly impacted the mental health of the general population in Malaysia, reducing their quality of life during the pandemic. Employment status, financial instability, and low annual incomes appeared to be risk factors contributing to mental distress, while older age, in contrast, played a protective role. To the best of our knowledge, this is the first large-scale study in Malaysia to assess the mental health and quality of life of the public during the pandemic. Our findings shed light on the long-run impact of lockdowns and pandemics. Preventive measures or intervention programs, such as community mental health support programs, awareness and educational campaigns, or suicide prevention programs, should be implemented as soon as possible to prevent the exacerbation of pre-existing mental conditions due to the pandemic. The primary limitation of this study is its inability to establish temporal links between outcomes and factors; base rates of mental health symptoms relative to other time points cannot be inferred from a cross-sectional study. A longitudinal study is recommended to determine the long-term mental health implications of all the potential risk factors highlighted in this study.
# Protective Role of Social Networks for the Well-Being of Persons with Disabilities: Results from a State-Wide Cross-Sectional Survey in Kerala, India
## Abstract
The current study presents the findings from a cross-sectional survey on social factors associated with the well-being of persons with disabilities (PWDs) in Kerala, India. We conducted a community-based survey across three geographical zones, North, Central, and South of Kerala state, between April and September 2021. We randomly selected two districts from each zone using a stratified sampling method, followed by one local self-government from each of these six districts. Community health professionals identified individuals with disabilities, and researchers collected information on their social networks, service accessibility, well-being, and mental health. Overall, 244 ($54.2\%$) participants had a physical disability, while 107 ($23.78\%$) had an intellectual disability. The mean well-being score was 12.9 (SD = 4.9, range = 5–20). Overall, 216 ($48\%$) had poor social networks, 247 ($55\%$) had issues regarding service accessibility, and 147 ($33\%$) had depressive symptoms. Among the PWDs with issues with service access, $55\%$ had limited social networks. A regression analysis revealed that social networks ($b = 2.30$, $p < 0.001$) and service accessibility (b = −2.09, $p < 0.001$) were associated with well-being. Social networks are more important than financial assistance because they facilitate better access to psycho-socioeconomic resources, a prerequisite for well-being.
## 1.1. People with Disabilities in Kerala, India
Disabilities and related complications, irrespective of their types, pose severe challenges across the globe. Globally, more than $15\%$ of people live with a disability, and the prevalence is significantly higher among people from low- and middle-income countries than in developed countries [1]. According to the disability census of Kerala, there are 793,937 people with disabilities in Kerala, accounting for $2.32\%$ of the total population [2], while the national average is $2.21\%$ [3]. Among the different types of disabilities in Kerala, locomotor disability is the most common, accounting for $31\%$ of all PWDs, followed by multiple disabilities ($17\%$) and mental illness ($12\%$). Vision and hearing impairment account for $7.8\%$ and $7.6\%$, respectively. Further, $46.63\%$ of PWDs in Kerala live below the poverty line [2]. In low- and middle-income countries like India, the rapid increase in disability incidence and severity has not been accompanied by planned initiatives to enhance well-being and overall health [4]. Due to various systemic barriers, the meager welfare services and programs already available are accessed by only a small proportion of people [5,6].
## 1.2. Social Networks of People with Disabilities
People with disabilities generally experience low levels of social integration and inclusion compared to the general population [7,8] for various reasons, such as functional limitations [9], social stigma, and discrimination [10]. People with disabilities have fewer social contacts and are less likely to begin relationships in everyday life [11], further leading to poorer employment opportunities and health outcomes [12]. Moreover, a study on different social networks among people in Kerala showed that $37.6\%$ of PWDs had a private restricted network type rather than a locally integrated one [13]. Taken together, these factors can increase the risk of social isolation in this already vulnerable group [14,15]. Further, people with disabilities have been found to have higher odds of depression and anxiety [13], and the personal and health characteristics of PWDs have been found to be mediated by social cohesion in Kerala [16].
In countries like India, where resource scarcity weakens social security nets, family members and neighbors should play a crucial role in the care and support of PWDs [17]. Neighborhood connectivity has the potential to provide knowledge from network members about locally accessible formal and informal resources, effective interventions, health behaviors, and employment opportunities [18]. PWDs create their networks based on employment, routine activities, family connections [19,20], and neighborhood interactions [21,22]. In unequal societies with weak safety nets, this networking is vital for learning about available resources, preventing the loss of existing services, lobbying for additional welfare measures, ensuring greater access to resources locally [23], and creating more growth opportunities. The existing evidence shows that more cohesive societies cooperate in providing welfare services to meet the needs of PWDs, mainly through resource mobilizations at the societal level [24]. Moreover, PWDs feel identified with a group or neighborhood that accepts and is compassionate towards them, which increases their social status [24], and, consequently, their mental health [25].
Family and neighborhood are the best sources of support for PWDs, given the scarcity of social support measures and the overall collectivist nature of Indian societies. However, there is a dearth of evidence about the specific social factors associated with the well-being of those with disabilities. We assume that developing a sense of connectedness and inclusion would play a pivotal role in enhancing their well-being, which would moderate the negative impact of disabilities. The findings of this study will help practitioners and policymakers in India devise strategies focused on strengthening social networking and neighborhood connectivity to enhance the well-being of these people.
## 2.1. Design
We conducted a cross-sectional, community-based study of PWDs across three geographical zones (North, Central, and South) of Kerala state, India, between April and September 2021. Kasaragod, Wayanad, Kannur, Kozhikode, and Malappuram districts make up the Northern zone. The Central zone consists of four districts: Palakkad, Thrissur, Ernakulam, and Idukki. The Southern zone includes Trivandrum, Kollam, Pathanamthitta, Alappuzha, and Kottayam districts. We randomly selected six districts from these three zones (two from each) using a stratified sampling method, followed by selecting one local self-government (LSG) body from each of these six districts. The local self-government bodies are administrative divisions within each district that function as its sub-units. The LSGs include municipalities or corporations (sub-units in urban areas) and panchayats (rural areas). We randomly selected two units from urban areas (one corporation and one municipality) and four panchayats; more panchayats were included to ensure better representation. (Kerala state has 941 grama panchayats, 87 municipalities, and 6 corporations.) Accredited Social Health Activists (ASHAs), who have an advantage due to their domicile, helped identify people with disabilities. After listing the names of the PWDs who had been identified, researchers made home visits until they had 75 consenting PWDs (or, in the case of children or those with severe disabilities, their carers) from each selected local self-government. Figure 1 describes the participant recruitment procedures of the current study.
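For illustration, the multistage random selection described above can be sketched as follows; the zone-to-district lists are taken from the text, while the seed and selection code are illustrative only.

```python
# Illustrative sketch of the multistage selection: two random districts per
# zone, after which one LSG would be drawn per selected district.

import random
random.seed(42)  # for a reproducible illustration only

ZONES = {
    "North":   ["Kasaragod", "Wayanad", "Kannur", "Kozhikode", "Malappuram"],
    "Central": ["Palakkad", "Thrissur", "Ernakulam", "Idukki"],
    "South":   ["Trivandrum", "Kollam", "Pathanamthitta", "Alappuzha", "Kottayam"],
}

selected_districts = {zone: random.sample(districts, 2)
                      for zone, districts in ZONES.items()}
print(selected_districts)
# One LSG would then be drawn from each selected district's LSG list the same way.
```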
## 2.2. Participant Recruitment
We recruited PWDs and their caregivers from the community through a multistage recruitment procedure. The researchers included the PWDs residing in the targeted location who consented to participate. We included people within the four major disability categories, including physical disability, intellectual disability, multiple disabilities, and other forms of disabilities. A random number technique was employed to identify 75 PWDs from each district and recruit a total of 450 participants for the current study.
## 2.3.1. Outcome Variable
The primary outcome, well-being, was measured by the WHO Well-Being Index [21], a set of five questions rated on a Likert scale with response options of “all of the time” [5], “most of the time” [4], “more than half of the time” [3], “less than half of the time” [2], “some of the time” [1], and “at no time” [0]. The scores ranged between 0 and 25, and a higher score indicated better well-being. The tool has been validated and found to have good reliability coefficients [26].
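For illustration, a minimal sketch of this 0–25 scoring follows.

```python
# Minimal sketch of the WHO Well-Being Index scoring described above
# (illustrative only).

RATING = {"all of the time": 5, "most of the time": 4,
          "more than half of the time": 3, "less than half of the time": 2,
          "some of the time": 1, "at no time": 0}

def wellbeing_score(responses):
    """responses: list of five response strings; returns a 0-25 total,
    where higher means better well-being."""
    return sum(RATING[r] for r in responses)

print(wellbeing_score(["most of the time"] * 5))  # 20
```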
## 2.3.2. Exposure Variables
Sociodemographic variables, mental health, well-being, and access to services were the major exposure variables measured in the current study. Sociodemographic variables included age, gender, education, marital status, employment, the color of the ration card, the type(s) of disability, and the percentage level of disability. Age was ascertained in years and was later grouped into four categories: children (0–18 years), young adults (19–39 years), middle adulthood (40–59 years), and elderly (above 60 years). Education was measured in five categories: not literate, literate but did not complete primary education, completed primary education (10th grade), completed secondary education (12th grade), and completed tertiary and above (graduation, diploma, or post-graduation). Marital status was ascertained in four categories: currently married, never married, widowed, and divorced/separated. Occupational details were measured as “employed”, “unemployed”, “student”, or “completely dependent”. A ration card is an official document, issued by the state government, that describes the eligibility to purchase subsidized food grains from the government distribution system. The colors, coded as yellow, pink, blue, and white, describe the socio-economic status of each household. The yellow and pink cards are for households below the poverty line, while the blue and white cardholders fall above the poverty line. The types of disabilities were categorized into four areas: physical disability, including locomotor disability, vision, hearing, and speech impairment; intellectual disability, including mental retardation and autism; multiple disabilities; and other forms of disabilities, which included disabilities due to a chronic neurological condition, Parkinson’s, or mental illness. The percentage of disability is ascertained from the disability certificate issued by the Government of India.
Mental health was measured using the DASS 21 (Depression, Anxiety, and Stress) Scale [27]. It includes 21 self-reported questions rated on a four-point scale (0–3), with “0” denoting “did not apply to me at all” and “3” meaning “applied to me very much, or most of the time”. The DASS 21 is a reliable and valid tool to measure mental health among adults [28].
Access to services was measured using a set of self-reported questions based on accessibility in four major areas: family income/employment, essential services, health care, and mental health care. Accessibility was rated on a four-point scale (1–4), with “1” denoting “as much as I need”, “2” representing “most times”, “3” indicating “sometimes,” and “4” meaning “not at all”. We also asked self-reported questions about barriers to accessing care in four major areas: awareness, absence of services, lack of support, and transportation, to which the participants replied using binary response options of “yes” [1], denoting the presence of the barrier, and “no” [0], indicating an absence.
Social networks were measured using a set of self-reported questions about the level of contact and support received from families, friends, and neighbors. The questions were measured on a four-point Likert scale (0–3), with 0 denoting “at no time”, 1 denoting “sometimes,” 2 denoting “most times”, and 3 denoting “at all times”. Based on median scores, they were classified as people with poor social networks and people with adequate social networks for analysis purposes.
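For illustration, the median-split classification could be computed as below; whether scores exactly at the median count as poor or adequate is not specified in the text, so the tie-handling shown is an assumption.

```python
# Minimal sketch of the median-split classification described above,
# assuming `network_scores` holds the summed social-network scores.

import statistics

network_scores = [3, 5, 7, 8, 2, 9, 6, 4, 7, 5]   # toy data
cutoff = statistics.median(network_scores)

# Scores above the median are labeled "adequate"; ties fall into "poor" here,
# which is an assumption rather than a rule stated by the study.
groups = ["adequate" if s > cutoff else "poor" for s in network_scores]
print(cutoff, groups)
```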
## 2.4. Data Analysis
We performed descriptive statistics to profile the PWDs with respect to their geographical locations and other demographic variables. We calculated frequencies and percentages through two-way tables to find differences between the subgroups of interest. Further, Chi-square tests were used to determine statistical differences between the variables. Linear regression was performed to identify the factors associated with well-being among people with disabilities. The level of statistical significance was set at $p \leq 0.05.$ All statistical analyses were performed in IBM SPSS 26 (New York, NY, USA) and Stata (StataCorp LLC, version 15, College Station, TX, USA).
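For illustration, the linear regression step can be sketched with statsmodels; the data frame and variable names below are hypothetical stand-ins for the study data, not the actual dataset.

```python
# Minimal sketch of a linear regression of well-being on social-network
# adequacy and service-access issues (illustrative only).

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({                       # toy stand-in for the survey data
    "wellbeing":        [10, 15, 8, 17, 12, 14, 9, 16],
    "adequate_network": [0, 1, 0, 1, 0, 1, 0, 1],   # 1 = adequate social network
    "access_issues":    [2, 0, 3, 0, 1, 1, 2, 0],   # count of services with access problems
})

model = smf.ols("wellbeing ~ adequate_network + access_issues", data=df).fit()
print(model.params)   # coefficients analogous to b = 2.30 and b = -2.09 in Table 4
```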
## 2.5. Ethical Considerations
We obtained ethical committee approval from the institution’s Institutional Review Board (Ref. No. RCSS/IEC/002/2021, dated 15 January 2021). We obtained informed written consent from participants and their caregivers before inclusion. We also explained the voluntary nature of participation and the right to withdraw at any data collection stage.
## 3.1. Demographic Characteristics
The study included data from 450 respondents (Table 1), the majority of whom were males ($62\%$). More than $65\%$ of the respondents were in early or middle adulthood, and $12.9\%$ were elderly. Overall, $72\%$ of the respondents had completed primary education, while $3\%$ were not literate. Further, $63\%$ of the respondents were unmarried, $54\%$ were unemployed or entirely dependent on family members, and $71\%$ were below the poverty line. Of the types of disability, $54.2\%$ had a physical disability, which included disabilities related to vision, hearing, speech, or locomotor functioning, and $24\%$ of the population had intellectual disabilities.
The mean well-being score for the study population was 12.9 (±4.9). There was no significant difference in well-being scores within the demographic variables studied. However, the scores were slightly higher for children, females, people who completed secondary or tertiary education, and people with less than $40\%$ disability. Summative scores for depression, anxiety, and stress, measured by the DASS scale in the current study group, were 6.62 (6.3), 9.3 (8.7), and 8.2 (7.7), respectively. Further, 147 ($32.7\%$) of PWDs had mild or above depression, 93 ($20.7\%$) had mild or above anxiety, and 278 ($61.8\%$) had mild or above stress.
Demographically, the northern zone of Kerala had the highest rate of PWDs without formal education, while unemployment among PWDs was highest in the southern zone. Summative well-being scores were highest among PWDs in the southern zone (mean = 14.2), followed by the northern (mean = 13) and central (mean = 11.7) zones. Depression, anxiety, and stress were highest among PWDs in the central zone. Social support from neighbors and family members was comparatively higher in the southern zone than in the others.
## 3.2. Service Accessibility
We studied access to income/employment, food, medical health care, and mental health care to assess service accessibility among PWDs. In the current population, many could not access income-generating employment ($40\%$) or medical services ($25\%$). In contrast, most had access to essential services ($94\%$), and $86\%$ had access to mental health treatment. Service access in all areas was comparatively higher among males. Furthermore, access to income and essential services was relatively higher among PWDs residing in the southern parts of Kerala, whereas access to treatment was better in the northern parts than in the other zones (Table 2).
Of the 450 participants, 203 ($45.11\%$) PWDs had no issue accessing services. Among the 247 people with service access issues, 149 ($33.11\%$) had trouble accessing one service, 64 ($14.22\%$) had issues with two services, 27 ($6\%$) with three services, and 7 ($1.56\%$) with all of the listed services. Among the 247 PWDs with service access issues, $55\%$ had limited social networks, whereas among people with adequate service access, $60\%$ had adequate social networks.
Table 3 describes the subgroup analysis of the accessibility variables with social networks and the types of disabilities. Inadequacies in accessing employment, essential services, medical care, and mental health care were more prevalent among people without adequate support from their families and neighborhoods. Overall, lower social network scores were found in $62\%$ of respondents with inadequate employment (statistically significant at $p < 0.001$), $59\%$ of respondents with insufficient access to food and other essential services, $52\%$ of respondents with inadequate medical health care, and $54\%$ of respondents with poor access to mental health care.
Table 4 presents the results of a linear regression analysis conducted to understand the association between social networks and well-being among the respondents. In the current study, people with adequate support scored, on average, 2.3 points higher on well-being than people with poorer social networks. The inability to access services and the presence of depression, anxiety, and stress symptoms were negatively associated with well-being in the current population.
## 4. Discussion
The current study aimed to identify the role of social networks and other social factors in improving the well-being of people with disabilities. Demographically, locomotor disabilities were the most common type; the northern zone of Kerala had the largest percentage of PWDs without a formal education, whereas the southern zone had the highest rate of unemployed PWDs. Study results point to comparatively fewer people accessing services in the central geographical zone of Kerala. The central zone, which performed well on both education and employment, had poorer levels of well-being and a greater demand for mental health services, especially due to limited access to disability services. This can be explained by the fact that this zone is home to a predominantly urban population with lower neighborhood connectivity and linkages [29]. The current study findings suggest that poor neighborhood connectedness leads to limited access to information and, thereby, to services. This finding is in line with another study conducted in South India [30].
Furthermore, the study findings stressed the importance of family and neighborhood support networks for better well-being and protection against adverse health outcomes in PWDs [31]. People with adequate support networks in the study had higher scores for overall well-being, and this is consistent with studies elsewhere [32,33,34]. This is all the more critical in unequal and stratified, resource-poor societies like India, which is characterized by inadequate safety nets and lean spending on social welfare, such as health care, education, and unemployment insurance [4].
Tapping into local neighborhoods’ physical, social, and service facilities depends on the neighborhood’s culturally defined friendliness/helpfulness and support patterns. There is sufficient evidence to prove that people living in supportive communities require fewer mental health services [35] due to better well-being. The supportive neighborhood disseminates knowledge about self-care and promotes access to locally available services, amenities, and affective support [36]. Through an amalgamation of collective efficacy, social support, and the prevalence of local organizations and voluntary associations, this social connectedness improves access among PWDs [37]. The affective or cognitive closeness with others makes it easier for people to communicate their concerns and gain knowledge about resources [38], especially regarding the non-governmental and volunteer organizations that can address their needs [39]. The participants’ enhanced ability to obtain resources from their networks significantly increases their well-being. Due to their social disconnect, people with disabilities are frequently deprived of opportunities for inclusion, which has an impact on their well-being.
If social networking is created with cultural sensitivity, and in accordance with the current community ecosystem, it can enhance inclusion, social functioning, and resource linkages. The advancement of technology would be another way to increase connectedness and enlarge the borders of the neighborhood. A few digital networking models are worth experimenting with in order to enhance social inclusion, spread awareness of available resources, and advocate for legislation promoting access and social inclusion [40]. This study challenges the current focus of policymakers and practitioners, who emphasize financial support alone as a means of enhancing the well-being of PWDs. Social networks can potentially address problems with inclusion, accessibility, and emotional requirements, indicating that PWDs are moving up Maslow’s hierarchy of needs. This upward trend could be linked to the general economic progress of caring families in conjunction with the nation’s development. The findings encourage policymakers to undergo a paradigm shift in the target areas of intervention strategies, which should be firmly grounded in the defense of the rights, dignity, and self-worth of people with disabilities from a psycho-socioeconomic perspective, as opposed to a solely economic one. The current gaps in the care of PWDs could be filled by co-creating social networks, simplifying linking pathways, and devising customized interventions.
The current study has its limitations as well. Firstly, as this is a cross-sectional study, the observed associations cannot be interpreted as causal inferences. The study included only PWDs identified through community health workers, i.e., already-known cases, which can limit the generalizability of the findings. Disability, a complex multidimensional phenomenon, cannot be fully measured quantitatively, which might be another limitation of the current study. However, the study’s findings encourage researchers to investigate PWDs’ lived experiences, particularly in light of the nation’s evolving psychosocial and economic environment.
## 5. Conclusions
Social networks and support are particularly crucial, as even the already existing formal and informal services and resources for these groups are embedded within the systems of society. The lack of a formal networking platform through which to meet each other in an empathetic environment is a significant barrier to accessing various resources. The well-being of people with disabilities would be improved by developing supportive neighborhood communities and including PWDs through participatory approaches. Social support and social networking take precedence over financial support because they give people a sense of belonging to a community and make it easier for them to obtain information about formal and informal services, eventually enhancing their well-being. The evidence that social networks and connectivity enhance the well-being of PWDs compels researchers to devise strategies to scale up networking by utilizing technological advancements, such as mHealth, social media, and geospatial resource navigation facilities, to detect, register, monitor, and link PWDs, further facilitating customized service access for community-dwelling PWDs.
# Dietary Advanced Glycation End Products and Risk of Overall and Cause-Specific Mortality: Results from the Golestan Cohort Study
## Abstract
Controversy exists regarding the association of dietary advanced glycation end products (dAGEs) with the risk of disease outcomes and mortality. We aimed to examine, prospectively, the association between dAGEs intake and the risk of overall and cause-specific mortality in the Golestan Cohort Study. The cohort was conducted between 2004 and 2008 in Golestan Province (Iran), recruiting 50,045 participants aged 40–75 years. Dietary intake over the preceding year was assessed at baseline using a 116-item food frequency questionnaire. The dAGEs values for each individual were calculated based on published databases of AGE values of various food items. The main outcome was overall mortality at the time of follow-up (13.5 years). Hazard ratios (HRs) and $95\%$ confidence intervals (CIs) for overall and cause-specific mortality were estimated according to dAGEs quintiles. During 656,532 person-years of follow-up, 5406 deaths in men and 4722 deaths in women were reported. Participants in the highest quintile of dAGE had a lower risk of overall mortality (HR: 0.89, $95\%$ CI: 0.84, 0.95), CVD mortality (HR: 0.89, $95\%$ CI: 0.84, 0.95), and death from other causes (HR: 0.89, $95\%$ CI: 0.84, 0.95) compared to those in the first quintile after adjusting for confounders. We found no association of dAGEs with the risk of mortality from cancer (all), respiratory and infectious diseases, or injuries. Our findings do not confirm a positive association between dAGEs and the risk of mortality in Iranian adults. There is still no agreement among studies investigating dAGEs and their health-related aspects, so further high-quality studies are required to clarify this association.
## 1. Introduction
Advanced glycation end products (AGEs) are a diverse group of compounds formed as the end products of spontaneous glycation of the amino groups of amino acids through the non-enzymatic Maillard reaction [1]. During the heat processing of foods, the Maillard reaction occurs when the carbonyl group of reducing sugars interacts with the amino acids of peptides or proteins, resulting in the reversible formation of Schiff base compounds that can promptly undergo molecular rearrangements to so-called Amadori products [2]. The Amadori products are pertinent precursors of AGEs, as they can rearrange into AGEs [2]. The Schiff base compounds or the Amadori product precursors can also be degraded into reactive dicarbonyls such as methylglyoxal, glyoxal, and 3-deoxyglucosone. These reactive dicarbonyls can react with a free or bound amino acid and form AGEs [2]. If excessive amounts of AGEs reach tissue and circulation, they become pathogenic [1]. This can occur by consuming a diet containing animal-source foods prepared with cooking processes, in particular roasting, grilling, broiling, and frying, which result in a further formation of AGEs in foods [3]. Diets with high AGE content have been associated with cardiovascular diseases (CVD) and metabolic dysfunction [4,5,6].
Similar non-enzymatic reactions, as described above, occur during the normal glycation process of the cell in human tissues to form AGEs, but at lower rates due to the lower physiological temperature [7]. Additional endogenous AGE formation pathways include glycolysis and the polyol pathway. In glycolysis, glyceraldehyde 3-phosphate produced through the general metabolism of glucose or fructose can spontaneously decompose to the reactive dicarbonyl compound methylglyoxal, resulting in AGEs formation [7]. The polyol pathway is active under hyperglycemic conditions and requires glucose conversion to sorbitol and sorbitol conversion to fructose, promoting the accumulation of dicarbonyl compounds and AGEs [7]. Moreover, lipid peroxidation of polyunsaturated fatty acids in cell membranes can also lead to increased dicarbonyl production and subsequent AGE formation [7]. The role of endogenous AGEs in various diseases and conditions, including diabetes and its microvascular complications, neurodegenerative disorders, some cancers, bone diseases, and oxidative stress conditions and chronic inflammation, has been explored [1,8].
Two major mechanisms are attributed to the pathologic effects of AGEs. First, they may cross-link proteins, directly changing their structure and consequently their properties and function. Second, AGEs bind to a specific receptor designated the receptor for AGEs (RAGE), a multi-ligand receptor; binding of AGE ligands to RAGE can stimulate the proinflammatory transcription factor nuclear factor-kappaB, inducing oxidative stress and inflammatory conditions [9].
The possible effect of dietary AGEs (dAGEs) on human health was previously ignored because it was believed that dietary AGEs are only slightly absorbed [3]. However, experimental studies with diets rich in AGEs have indicated a positive correlation between dAGEs and the body’s AGE pool [10]. A higher intake of dAGEs increased the odds of general and abdominal obesity, which are major risk factors for several chronic diseases [11,12]. In a prospective cohort study, higher dAGEs intake increased the risk of breast cancer in postmenopausal women [13]. Consumption of dAGEs promoted the growth of breast and prostate tumor models by forming a tumor-promoting stromal microenvironment [14]. Although some studies have investigated the association of dAGEs with chronic disease mortality in healthy populations, as well as in adults with co-morbidities, little is known about the ability of dAGEs to predict all-cause and cause-specific mortality in a general adult population. To our knowledge, no study has investigated the association of dAGEs intake with the risk of overall mortality in Iran. Therefore, we aimed to examine, prospectively, the association between dAGEs and risk of overall mortality in an Iranian population. The association of dAGEs with the risk of CVD and cancer mortality was also investigated.
## 2.1. Background
We examined data from the Golestan Cohort Study (GCS), a population-based cohort of the general population of Golestan Province in Northeast Iran. The design of the GCS has been described elsewhere [15]. In summary, the cohort aimed to investigate the incidence of oesophageal squamous cell carcinoma. The study was conducted between 2004 and 2008 in Golestan Province, recruiting 50,045 participants aged 40–75 years from Gonbad city and 326 rural areas ($20\%$ and $80\%$ from urban and rural areas, respectively). Each participant provided informed consent before enrollment. Participants were excluded if they had an implausible estimate of energy intake, a cancer diagnosis before the study, missing or inconclusive information on the food frequency questionnaire (FFQ) and/or the general questionnaire (containing information on socio-demographic and socio-economic status, history of diabetes and hypertension, smoking, alcohol drinking, opium use, and anthropometrics), or extreme values of body mass index (BMI). In total, 48,632 individuals were included in our analyses (27,975 women and 20,657 men) (Figure 1). The Institutional Review Boards of the Digestive Disease Research Center (DDRC) of Tehran University of Medical Sciences, the US National Cancer Institute (NCI), and the World Health Organization International Agency for Research on Cancer (IARC) approved the study.
## 2.2. Dietary Assessment
The FFQ from the GCS was used to assess the usual frequency and portion size of dietary intake of 116 food items over the past 12 months. The questionnaire has been shown to be reliable and valid [16]. Data on usual portion size, consumption frequency, and servings consumed each time were obtained for each food item at recruitment. The consumption frequency of each food item was recorded on a daily, weekly, or monthly basis and converted into daily intakes; portion sizes were then converted into grams using household measures [17,18]. Nutritionist V software and the Iranian Food Composition Table [19] were used to assess daily dietary intake. To estimate the dAGEs intake from different foods, including fruits, vegetables, dairy, cereals, meats (white and red meat), processed meats (sausage, hamburger, salted fish, and smoked fish), and fats, published databases of AGE values of various food items were used to calculate a weighted mean dAGE value for each FFQ line item [3,20]. We used published databases of the AGE content of commonly consumed foods because no information on AGE values is available in the Iranian Food Composition Table [19]. In these databases, the AGE content of 549 food items was measured using a validated immunoassay method [3,20], and data were available for Nε-carboxymethyllysine (CML), the most-studied AGE in the literature. We assigned CML values in kilounits (kU) per 100 g of solid food or 100 mL of liquid to 84 food items. For each food item, the individual AGE value was calculated by multiplying the assigned CML value by the frequency and portion size (gram value of the respective food item) reported by the individual, as sketched below. The total dAGE value for each participant was then calculated as the sum of the individual AGE values of all food items included in the FFQ. Food items with no similar food available in the databases were considered missing (32 food items) [11]. Because AGE values were not available for all kinds of fruits, vegetables, and legumes, the mean values of comparable fruits, vegetables, and legumes were used [11,12].
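A minimal Python sketch of this per-participant calculation; the food names, CML values, and intakes below are hypothetical placeholders, not values from the cited databases:

```python
# Hypothetical sketch of the per-participant dAGE calculation.
# CML values are illustrative placeholders (kU per 100 g), not database values.
CML_KU_PER_100G = {
    "white bread": 87.0,
    "grilled chicken": 5828.0,
    "butter": 26480.0,
}

def total_dage_ku_per_day(daily_intake_g: dict) -> float:
    """Sum CML-based AGE intake (kU/day) over FFQ items; items with no
    database match are treated as missing, as described in the text."""
    total = 0.0
    for food, grams in daily_intake_g.items():
        cml = CML_KU_PER_100G.get(food)
        if cml is None:
            continue  # unmatched FFQ items are skipped (missing)
        total += cml * grams / 100.0
    return total

# Example: one participant's simplified daily intake in grams
print(total_dage_ku_per_day({"white bread": 150, "grilled chicken": 80, "butter": 10}))
# -> 7440.9 kU/day, within the cohort's reported range
```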
## 2.3. Measurement of Potential Confounding Variables
All participants were interviewed by trained clinicians and/or non-clinicians, and data on lifestyle and demographics were obtained using a pre-defined questionnaire. Anthropometrics, including weight, height, BMI, and waist-to-hip ratio (WHR), were measured following the World Health Organization guidelines [15,21]. Physical activity was expressed in metabolic equivalents of task per minute per week and grouped into tertiles [22]. The wealth score, a proxy for socioeconomic status, was estimated for each participant based on house ownership, structure, size and appliances, family size, etc. [23]. Wealth scores were then categorized into quartiles. Other potential confounders included age, gender, cigarette smoking, opium use, alcohol drinking, and history of diabetes and hypertension.
## 2.4. Follow-Up and Cause of Death Ascertainment
The follow-up strategies of this cohort study have been detailed elsewhere [15]. In summary, follow-ups were performed every 12 months. The vital status of the participants was obtained through phone calls or home visits by the study group. The overall success rate at the time of follow-up (13.5 years) was $98.9\%$ (517/50,045 lost to follow-up). The main outcome was all-cause mortality. Any death report was confirmed by a clinician visit and a complete validated verbal autopsy questionnaire [24]. Moreover, two external internists separately reviewed all information regarding the verbal autopsy and medical records and determined the cause of death. In case of any disagreement between the two specialists, a third, more experienced internist considered all data and made the final decision [15]. For the analyses, major causes of death among the participants were assessed as secondary outcomes. Analyses were performed only on subjects with confirmed death.
## 2.5. Statistical Analysis
Total dAGE values were categorized into quintiles, and the characteristics of participants were compared across the quintiles of dAGE using analysis of variance (ANOVA) for continuous variables and the χ2 test for categorical variables. Cox proportional hazards models, with follow-up duration as the timescale and dAGE quintiles as the exposure (lowest category as the reference), were used to assess the associations between dAGE and risk of overall and cause-specific mortality. Age-adjusted and multivariate-adjusted hazard ratios (HRs) and $95\%$ confidence intervals (CIs) were provided for each outcome. In the multivariate models, the HRs were adjusted for confounding variables, including age, gender, energy intake, physical activity, pack-years of cigarette smoking, BMI, alcohol drinking, opium use, and history of diabetes and hypertension. The length of follow-up for each participant was counted from the recruitment date until the date of death, loss to follow-up, or the reference follow-up date (30 July 2018), whichever came first. All statistical analyses were carried out in SPSS (version 18; SPSS Inc., Chicago, IL, USA), and $p \leq 0.05$ was regarded as significant.
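A minimal sketch of this quintile-based modeling step, assuming the Python lifelines package and hypothetical column names (the study itself used SPSS):

```python
# Minimal sketch of the quintile-based Cox model, assuming the Python
# lifelines package and hypothetical column names (the study used SPSS).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("golestan_analysis.csv")  # hypothetical analysis file

# Quintiles of total dAGE, with Q1 as the reference category
df["dage_q"] = pd.qcut(df["dage"], 5, labels=["Q1", "Q2", "Q3", "Q4", "Q5"])
dummies = pd.get_dummies(df["dage_q"], prefix="dage", drop_first=True).astype(int)

covars = ["age", "sex", "energy_kcal", "physical_activity", "pack_years",
          "bmi", "alcohol", "opium", "diabetes", "hypertension"]  # assumed numeric
model_df = pd.concat([df[["time_months", "died"] + covars], dummies], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="time_months", event_col="died")
cph.print_summary()  # exp(coef) gives HRs with 95% CIs for Q2-Q5 vs. Q1
```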
## 3. Results
In total, 48,632 participants were included in our analysis, of which $57.5\%$ were women and $79.7\%$ were inhabitants of rural areas. The mean (standard deviation (SD)) age of participants at baseline was 52 (8.9) years. During a mean of 13.5 (3.4) years of follow-up, 10,128 deaths were documented ($46.6\%$ women). The main causes of death were cardiovascular diseases (n = 3762), gastrointestinal cancers (n = 966), other cancers (n = 815), respiratory diseases (n = 648), infectious diseases (n = 418), injuries (n = 402), and other causes (n = 1527).
As presented in Table 1, participants in the highest quintile of dAGE values were younger, had higher BMI and WHR, and were more likely to smoke compared with those in the lowest quintile. There were also more alcohol drinkers, and fewer reports of a history of diabetes and hypertension, among the participants in the highest quintile of dAGE. Moreover, compared with those in the lowest quintile, participants in the highest quintile of dAGE had higher wealth scores and higher energy intake. Calculated total dAGE values across all food items in the FFQs ranged from 67.6 to 21,995.9 kU/day, and the mean (SD) dAGE value of all participants was 7066.7 (2916.8) kU/day. Participants with higher dAGEs tended to consume more fruits, vegetables, dairy, cereals, meats, and fats (Table 1).
Table 2 presents HRs for all-cause mortality, according to the dAGE quintiles. Participants at the highest quintile of dAGE had a lower risk (age-adjusted) of all-cause mortality (HR: 0.86, $95\%$ CI: 0.81, 0.92) compared to those in the first quintile. Further adjustment for other confounding variables including energy intake, physical activity, smoking, BMI, alcohol drinking, opium usage, and history of diabetes and hypertension did not change the results (Table 2).
Table 3 presents HRs for cause-specific mortality according to the dAGE quintiles. Participants in the highest quintile of dAGE had a lower risk of CVD mortality (HR: 0.88, $95\%$ CI: 0.79, 0.98) compared to those in the first quintile, and the decreased risk was more evident in women. Adjusted HRs indicated no association of dAGE intake with risk of mortality from all cancers (HR: 0.89, $95\%$ CI: 0.76, 1.03), gastrointestinal cancers (HR: 0.89, $95\%$ CI: 0.72, 1.09), or other cancers (HR: 0.89, $95\%$ CI: 0.71, 1.11). These findings did not differ in sex-specific analyses. Participants in the highest quintile of dAGE had a lower risk of death from other causes (i.e., causes other than CVD, cancers, respiratory and infectious diseases, and injuries) (HR: 0.79, $95\%$ CI: 0.67, 0.92) compared to those in the first quintile, and the decreased risk was more evident in men. No association was observed between dAGE quintiles and death from infectious and respiratory diseases or injuries (Table 3).
## 4. Discussion
Examining longitudinal data from the GCS, we did not find dAGEs to be associated with an increased risk of overall or cause-specific mortality. Instead, a higher intake of dAGEs was associated with a reduced risk of overall mortality, CVD mortality, and death from other causes. A gender-specific analysis showed that, in men, the highest versus lowest quintile of dAGEs was associated with a $12\%$ and $24\%$ reduced risk of overall mortality and death from other causes, respectively. Compared to the lowest quintile, women in the highest quintile of dAGEs had a $9\%$ and $19\%$ lower risk of overall and CVD mortality, respectively.
The findings of the present study are in agreement with the recent study of Nagata et al. [25], who showed that a higher intake of CML, a major AGE product, was inversely associated with the risk of total mortality in Japanese adults. Furthermore, no association was found between dietary intake of AGEs and total and colorectal cancer mortality among colorectal cancer patients in the EPIC (European Prospective Investigation into Cancer and Nutrition) study [26]. Similar findings have been reported when examining the association of serum AGEs with all-cause and CVD mortality [27]. In contrast, adolescents with the highest dAGE intake were more likely to have metabolic syndrome when compared to those in the lowest quartile of dAGE intake [28]. In a large prospective cohort with 12.8 years of follow-up, higher dAGE intake was associated with an increased risk of breast cancer in postmenopausal women [13]. Moreover, higher dAGEs have been related to an increased risk of all-cause, CVD, and breast cancer mortality in postmenopausal women diagnosed with invasive breast cancer [29]. In another study, over a follow-up of 10.5 years, men but not women in the fifth quintile of dAGE intake had a higher risk of pancreatic cancer [30]. Over a 13-year follow-up, no significant association was revealed between higher CML intake and total cancer risk in male and female participants [31]; yet, CML intake in the highest quartile was associated with an increased risk of liver cancer and a decreased risk of stomach cancer in men [31]. One explanation for the contradictory results is the inconsistency of the AGE content of foods or diets used in different studies due to different cooking processes. Besides, population characteristics per se might affect the association as well. The majority of studies were performed on subjects with preexisting medical conditions, which could affect the results when compared to healthy adults or the general population.
Controversy exists regarding the toxicity of AGEs in the body. In observational studies, higher dAGEs have been associated with intermediate outcomes such as oxidative stress and inflammation in type 2 diabetes patients [4]. In subjects with cardio-metabolic conditions such as overweight, obesity, or prediabetes, an AGE-restricted diet reduced some inflammatory markers and improved insulin sensitivity [5]. However, a meta-analysis of clinical trials did not support an effect of AGE-restricted diets on the inflammatory profile of healthy individuals or of those with diabetes or renal failure [32]. On the other hand, a positive association of dAGEs with chronic disease outcomes such as breast cancer [13], obesity [12], and chronic kidney disease [33] has been shown. The presumed toxicity may stem from studies in which the dietary content of AGEs was a significant contributor to excess serum AGE levels [34]. This toxicity, however, has been debated in the literature [35].
Studies examining the association of dAGEs with total and/or cause-specific mortality are rare, and thus there is no conclusive evidence suggesting dietary AGEs to be detrimental to human health [36]. A major part of the absorbed dietary AGEs is rapidly excreted by the kidneys, resulting in insignificant plasma levels of these metabolites [37]. Due to the very rapid excretion of CML from the body, the probability of any effect on body proteins has been considered low, with only limited consequences expected in some organs such as the liver and kidneys [37]. Therefore, the effect of dAGEs on human health still needs further elucidation.
Our results showed that, regarding the association of dAGEs with the risk of CVD mortality, higher dAGE values were less protective in men than in women. This could be explained by some additional CVD risk factors, such as age above 50, smoking, alcohol drinking, and opium use, being more frequent in men. On the other hand, compared to the lowest quintile of dAGE, men with higher dAGE values had a lower risk of total mortality and death from respiratory diseases, probably due to their lower BMI and WHR and greater physical activity compared to women.
Our study has several strengths, including the longitudinal design, the large sample size representing the general population, and a high follow-up rate. Additionally, we performed our analyses adjusting for the most relevant confounders. There are some limitations as well. The first concerns the dAGE values assigned to each food item. Since no AGE value is available for any food item in the Iranian food composition tables, we used the most commonly studied AGE databases, which are based on diets common in a Northeastern metropolitan US area [3,20] and might not represent the Iranian foods assessed in this study. Moreover, even for similar food items, the AGE content of food measured in the literature might differ from the AGE content of the food items in the FFQ used in the present study due to different cooking processes and could, therefore, affect the results. Secondly, we assigned the same AGE values to some similar food items, such as fruits, legumes, and vegetables, for which no specific AGE values were available in the literature. Additionally, some characteristics of subjects might have changed since the baseline measurement, which could affect the analyses.
In conclusion, our findings indicated an inverse association between dAGEs intake and the risk of overall and cause-specific mortality. Although dietary AGEs have been reported to be associated with an increased risk of disease, our findings did not confirm a positive association between dAGEs and mortality in Iranian adults. There is still no agreement among studies investigating dAGEs and their health-related aspects. Evidence has either argued against the adverse effects of dAGEs or revealed a protective effect of an AGE-restricted diet on different health conditions for some specific dAGEs due to antioxidant activity. Yet, studies on healthy subjects are limited, and the current evidence is inconclusive. Therefore, further high-quality studies are required to clarify the impact of dietary AGEs on disease and mortality risk.
# Clinical Characteristics and Predictors of Long-Term Prognosis of Acute Peripheral Arterial Ischemia Patients Treated Surgically
## Abstract
Background: Acute peripheral arterial ischemia is a rapidly developing loss of perfusion, resulting in ischemic clinical manifestations. This study aimed to assess the incidence of cardiovascular mortality in patients with acute peripheral arterial ischemia and either atrial fibrillation (AF) or sinus rhythm (SR). Methods: This observational study involved patients with acute peripheral ischemia treated surgically. Patients were followed up to assess cardiovascular mortality and its predictors. Results: The study group included 200 patients with acute peripheral arterial ischemia and either AF ($$n = 67$$) or SR ($$n = 133$$). No cardiovascular mortality differences between the AF and SR groups were observed. AF patients who died of cardiovascular causes had a lower prevalence of peripheral arterial disease ($31.6\%$ vs. $58.3\%$, $p = 0.048$) and hypercholesterolemia ($5.3\%$ vs. $31.2\%$, $p = 0.028$) than those who did not die of such causes. Patients with SR who died of cardiovascular causes more frequently had a GFR <60 mL/min/1.73 m2 ($47.8\%$ vs. $25.0\%$, $p = 0.03$) and were older than those with SR who did not die of such causes. The multivariable analysis showed that hyperlipidemia reduced the risk of cardiovascular mortality in patients with AF, whereas in patients with SR, an age of ≥75 years was the predisposing factor for such mortality. Conclusions: Cardiovascular mortality of patients with acute ischemia did not differ between patients with AF and SR. Hyperlipidemia reduced the risk of cardiovascular mortality in patients with AF, whereas in patients with SR, an age of ≥75 years was a predisposing factor for such mortality.
## 1. Introduction
Acute peripheral arterial ischemia is defined as a rapidly developing loss of perfusion, resulting in variable ischemic clinical manifestations and potential necrosis of the involved organ or extremities. This disease is associated with high morbidity and mortality [1]. The incidence of acute limb ischemia is ~1.5 cases per 10,000 persons per year [2]. Diagnostic errors and delays in treatment may lead to the loss of limbs, related to the lack of sufficient time for new blood vessel growth to compensate for the loss of perfusion, or even loss of life [3]. According to previous studies, 15–$20\%$ of patients die within the first year after acute lower extremity ischemia, and most of these deaths occur in the peri-operative period. Apart from higher in-hospital mortality, patients with acute arterial ischemia experience adverse events, such as the following: congestive heart failure exacerbation, myocardial infarction, deterioration in renal function, and respiratory complications [4]. There are various causes leading to the occurrence of acute limb ischemia, including the following: arterial embolism ($46\%$), in situ thrombosis ($24\%$), complex factors ($20\%$), and stent- or graft-related thrombosis ($10\%$) [5]. Atrial fibrillation (AF), a sustained cardiac arrhythmia, is the most common cause of embolism and a risk factor for peripheral arterial occlusion [6]. Thromboembolic complications of AF frequently cause morbidity and mortality [7]. Embolism-associated limb ischemia was demonstrated to be related to a higher mortality risk when compared to the occlusion of an artery with local thrombosis in atherosclerotic etiology [8]. Moreover, the mortality risk in patients with AF-related peripheral embolic complications was greater than in those with myocardial infarct-related embolism [9].
The aim of the study is to assess the incidence of cardiovascular mortality in patients with acute peripheral arterial ischemia and either AF or sinus rhythm (SR) and to attempt to identify the predisposing factors of cardiovascular mortality in these two groups of patients during long-term follow-up.
## 2.1. Study Design and Participants
This is a retrospective observational study involving 200 consecutive patients with acute peripheral ischemia and either AF ($$n = 67$$) or SR ($$n = 133$$), who were admitted to the Department of Vascular Surgery between January 2014 and November 2018. The median follow-up was 21 (IQR 7–37) months.
A complete medical history, including information on prior treatment, was obtained from all participants. The diagnosis of acute arterial ischemia was based on history-taking and physical examination and, depending on the area of ischemia, on different imaging examinations [10]. All patients underwent a duplex ultrasound examination (DUS). Computed tomography angiography (CTA) was performed in $78\%$ of cases of acute limb ischemia and $92\%$ of cases of mesenteric ischemia.
AF was diagnosed as defined by the European Society of Cardiology [11]. The risk of thromboembolic complications was assessed using the CHADS2 and CHA2DS2-VASc scales at hospital admission. CHADS2 and CHA2DS2-VASc scores did not include the thromboembolic event that resulted in hospitalization. The CHADS2 score was calculated for each patient in accordance with the following guidelines: congestive heart failure, hypertension, diabetes mellitus, and age ≥75 years were counted as 1 point each; a history of stroke or transient ischemic attack counted as 2 points [12]. The CHA2DS2-VASc score was also calculated for each patient per current clinical guidelines. This score ranges from 0 to 9 points and includes the following clinical characteristics: congestive heart failure or left ventricular dysfunction (1 point), hypertension (1 point), age ≥75 years (2 points), diabetes mellitus (1 point), prior stroke/transient ischemic attack (TIA) or thromboembolism (2 points), vascular disease (1 point), age 65–74 years (1 point), and sex category (female; 1 point) [13]. The sum of all factors gives the individual patient’s risk score. In addition, the estimated glomerular filtration rate (eGFR) was calculated using the simplified four-variable Modification of Diet in Renal Disease (MDRD) formula: eGFR = 186 × (serum creatinine)$^{-1.154}$ × (age)$^{-0.203}$ (× 0.742 if female) [14].
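For illustration, a minimal Python sketch of the two calculations above (a hypothetical implementation, not the study’s code):

```python
# Hypothetical implementation of the two scores above; a sketch, not the
# study's code. Inputs: 0/1 comorbidity flags, age in years, creatinine in mg/dL.

def cha2ds2_vasc(chf, htn, age, dm, stroke_tia_te, vascular, female):
    """CHA2DS2-VASc score (0-9) per the point scheme described above."""
    score = chf + htn + dm + vascular + int(female)
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 2 * stroke_tia_te
    return score

def egfr_mdrd(creatinine_mg_dl, age, female):
    """Simplified four-variable MDRD eGFR (mL/min/1.73 m2)."""
    egfr = 186.0 * creatinine_mg_dl ** -1.154 * age ** -0.203
    return egfr * 0.742 if female else egfr

print(cha2ds2_vasc(chf=1, htn=1, age=78, dm=0, stroke_tia_te=0, vascular=1, female=True))  # 6
print(round(egfr_mdrd(1.4, 78, female=True), 1))  # ~38.6
```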
The study was approved by the university Bioethics Committee (no. 111/2020) and was conducted according to the principles of the Declaration of Helsinki. The university Ethics Committee waived the requirement of obtaining informed consent from the patients.
## 2.2. Surgical Treatment
All patients were treated surgically using open thrombectomy/embolectomy, mechanical thrombectomy (MTH), direct catheter thrombolysis (DCT), or primary amputation. The type of treatment depended on the cause and depth of ischemia and the patient’s general condition. Revascularisation was not performed in cases of advanced intestinal or limb ischemia/necrosis. In patients with major comorbidities who experienced significant improvement in their clinical state after conservative treatment, surgical treatment was postponed for elective preparation. The condition of the peripheral circulation was critical in deciding on the treatment method. Because most patients lacked peripheral flow, DCT was performed. In cases of significant stenosis, the restoration of peripheral blood flow served as a prelude to further procedures; treatment was discontinued once adequate circulation was restored.
All patients treated with MTH and DCT underwent arteriography. Unfractionated heparin (UFH) was administered intravenously in all patients with acute ischemia; it was delivered with an infusion pump to prolong the APTT 2.5–3-fold. The infusion was preceded by a bolus of 5000–10,000 IU of heparin. In most cases, open limb embolectomy/thrombectomy was performed under epidural or spinal anesthesia. In patients in whom this type of anesthesia was contraindicated, local anesthesia was used along with sedation. The clots were removed using a Fogarty catheter, proximally and peripherally, until acceptable inflow and outflow were obtained. Endarterectomy was performed in the presence of massive atherosclerotic plaques at the site of the artery incision. The arteriotomy was closed with a primary suture, or with patch angioplasty in the case of a small-diameter artery. In the absence of good inflow or outflow, patients were considered for bypass. In cases of suspected subfascial edema, fasciotomy was performed.
Open visceral thrombectomy/embolectomy was performed by laparotomy, via either a trans- or retroperitoneal approach. Patch angioplasty, transposition, or bypass was performed, depending on the etiology of the occlusion. In the case of intestinal necrosis, resection was performed within the limits of visually healthy tissue. Patients were always qualified for a “second look” within 24–48 h.
## 2.3. Endovascular Treatment
Using WinPepi® V11.65, the required sample size for a survival analysis was computed with $90\%$ statistical power and a 0.05 significance level [15]. Although larger event-rate disparities have been reported, the required sample was calculated at 147, assuming a hazard ratio of 1.6 (1.3 to 1.9) across groups [16,17]. Allowing for an expected loss-to-follow-up rate of $5\%$, a total sample of 154 was targeted.
DCT and MTH were the first-choice endovascular methods used to treat acute ischemia in all areas [18]. Access via the common femoral or left radial artery was used. Percutaneous transluminal angioplasty (PTA) and stenting were performed. During DCT infusion, Alteplase (Actilyse, Boehringer Ingelheim) was administered at 1 mg/h after a 5 mg bolus. UFH was administered simultaneously through the sheath (500 IU/h). Fibrinogen and APTT levels were checked four times a day. DCT was terminated early if the fibrinogen level fell below 150 mg/dL [19,20]. Control arteriography was performed before sheath removal. During mechanical thrombectomy, AngioJet (Boston Scientific, Marlborough, MA, USA) and Rotarex (Straub Medical, Vilters-Wangs, Switzerland) systems were used with different catheter diameters, depending on the size of the artery. If the procedure’s effectiveness was insufficient, DCT or PTA/stenting was performed. The Spider embolic protection system (Medtronic) was used during some procedures on the arteries of the lower limbs. After surgery, UFH was administered intravenously using an infusion pump with the target of prolonging the APTT 2.5–3-fold. In cases of simultaneous occlusion of the celiac trunk and superior mesenteric artery, we tried to open both arteries. DCT was used cautiously because of the known mechanism of endogenous thrombolysis occurring during intestinal ischemia [21].
## 2.4. Study Endpoint
The study endpoint was cardiovascular mortality during long-term observation.
## 2.5. Statistical Analysis
Categorical data are expressed as numbers of patients and percentages. The Chi-squared test or Fisher’s exact test was used to compare proportions. Numeric variables are presented as medians and quartiles and were compared using the Mann–Whitney U test, because their distribution was not normal (assessed by the Shapiro–Wilk test, graphical curve analysis, and kurtosis). In the survival analysis, the endpoint was defined as cardiovascular death. Follow-up was calculated as the number of days from surgery to death (cardiovascular or not) or to the end of the study (for surviving patients). Survival curves for the AF and SR groups were created by the Kaplan–Meier method.
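A sketch of this Kaplan–Meier step, assuming the Python lifelines package and hypothetical column names (days from surgery, a cardiovascular-death indicator, and rhythm group); the study itself used R:

```python
# Sketch of the Kaplan-Meier comparison, assuming the lifelines package and
# hypothetical columns: days (from surgery), cv_death (0/1), rhythm ("AF"/"SR").
import matplotlib.pyplot as plt
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.read_csv("ischemia_followup.csv")  # hypothetical analysis file

kmf = KaplanMeierFitter()
ax = plt.subplot(111)
for rhythm, grp in df.groupby("rhythm"):
    kmf.fit(durations=grp["days"], event_observed=grp["cv_death"], label=rhythm)
    kmf.plot_survival_function(ax=ax)
ax.set_xlabel("Days from surgery")
ax.set_ylabel("Survival probability")
plt.show()
```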
Patients’ characteristics and type of surgery were assessed in univariable Cox proportional hazards regression models to evaluate their relationship with cardiovascular death. The predictive regression model was created using regression analysis and dimension reduction by backward feature elimination. Clinically relevant variables were included in the multivariable analysis if they were associated with cardiovascular or non-cardiovascular death in the univariable analysis at $p \leq 0.1$. Several multivariable Cox proportional hazards models are also presented. Hazard ratios (HR) in univariable and multivariable Cox models were estimated, along with $95\%$ confidence intervals. A stratified analysis was conducted for SR and AF patients. Cox proportional hazards regression models were not created for categorical variables with fewer than five patients in any category. Statistical tests were two-tailed, and p-values < 0.05 were considered significant. All statistical analyses were performed using the R software package, version 3.6.2.
## 3.1. Characteristics of the Study Group
In the present study of 200 patients, 67 ($33.5\%$) had AF and 133 ($66.5\%$) had SR. Patients with AF were significantly older (78.0 vs. 70.0 years, $p = 0.003$), and women represented the majority in this group ($62.7\%$ vs. $39.1\%$, $p = 0.002$), in comparison with the SR group. The incidence of comorbidities did not differ significantly between the groups; only ischemic heart disease was more prevalent in the AF group ($55.2\%$ vs. $35.3\%$, $p = 0.007$). Patients with AF had a significantly lower eGFR than patients with SR (64.3 vs. 73.7, $p = 0.03$). CHADS2 and CHA2DS2-VASc scores were also higher in patients with AF than in patients with SR. In patients with AF, embolic occlusion was more frequent, while in patients with SR, occlusion was more often associated with thrombotic material. In both groups, thromboembolic material was found predominantly in the lower limbs ($74.6\%$ vs. $84.2\%$, $p = 0.10$). The clinical characteristics of patients with AF and SR are presented in Table 1.
## 3.2. Incidence of Mortality in Patients with Acute Peripheral Arterial Ischemia and Atrial Fibrillation or Sinus Rhythm
The median follow-up was 20.9 (IQR: 7.4, 34.3) months in the AF group and 22.6 (IQR: 7.4, 40.3) months in the SR group ($p = 0.45$). There were no differences in all-cause mortality between the AF and SR groups ($43.3\%$ vs. $31.6\%$, $p = 0.10$). Cardiovascular mortality was similar in patients with AF and SR ($28.4\%$ vs. $18.8\%$, $p = 0.12$) (Table 2).
The analysis of Kaplan–Meier curves shows that in the initial period after surgery, the chances of survival were similar in both groups (Figure 1).
## 3.3. Factors Predisposing to Cardiovascular Mortality
In the AF group, patients who died of cardiovascular causes had a lower prevalence of PAD and hypercholesterolemia than those who did not die of such causes (PAD: $31.6\%$ vs. $58.3\%$, $p = 0.048$; hypercholesterolemia: $5.3\%$ vs. $31.2\%$, $p = 0.03$) (Table 3).
The comparison of patients with SR who died of cardiovascular causes and those with SR who did not die of such causes revealed that the former more frequently had a GFR < 60 mL/min/1.73 m2 ($47.8\%$ vs. $25.0\%$, $p = 0.03$) and tended to be older (age >75 years: $60.0\%$ vs. $33.3\%$, $p = 0.04$) (Table 4).
In this study, the CHA2DS2-VASc score was similar in patients who died of cardiovascular causes and in those who did not die of such causes, both in the AF and in the SR group.
The multivariable analysis showed that the presence of hyperlipidemia reduced the risk of cardiovascular mortality in patients with AF, whereas in patients with SR, an age of ≥75 years was the factor predisposing to such mortality (Table 5).
## 4. Discussion
AF increases the risk of thromboembolic episodes, which are often responsible for high morbidity and mortality in this group of patients [7,22,23]. The study of Barreto et al. [24], comprising patients with peripheral arterial embolism, confirmed the role of AF in the pathogenesis of acute limb ischemia. In our study, the estimated glomerular filtration rate (eGFR) of AF patients was significantly lower than that of SR patients. Moreover, they had higher CHA2DS2-VASc scores than patients with sinus rhythm. Despite this, only $35.8\%$ of patients in the AF group were receiving oral anticoagulants, and even fewer were treated with antiplatelet agents ($13.4\%$) before hospital admission. In turn, SR patients were significantly more often administered antiplatelet treatment (APT) ($40.6\%$), which is in accordance with current recommendations. Howard et al. [5] demonstrated that premorbid levels of anticoagulation in patients suffering from acute events of cardioembolic origin, as well as known AF, are deficient. However, the vast majority of patients with a high thromboembolism risk (CHA2DS2-VASc score ≥ 2) had no contraindications to anticoagulation. Additionally, Ralevic et al. [25], in a prospective observational study of consecutive patients with lower limb amputation, found that despite a high prevalence of AF, patients often did not receive the recommended oral anticoagulation therapy. In this study, the occlusion in patients with AF and SR was mainly localized to the lower extremities. A higher prevalence of acute lower limb ischemia has also been indicated in other studies. Ischemia affecting the upper extremities is relatively uncommon, accounting for less than $5\%$ of all cases of limb ischemia [26,27]. In this study, acute peripheral arterial ischemia in AF patients was primarily caused by an embolus ($65.7\%$), and in SR patients by a thrombus ($55.6\%$). This observation was confirmed by Mutirangura et al. [28], who revealed that AF was more prevalent in patients with acute arterial embolism than in those with acute arterial thrombosis.
Systematic reviews and meta-analyses have clearly indicated an association of AF with an increased risk of mortality in patients with coronary artery disease [29,30]. However, the prognostic implication of AF in acute peripheral arterial ischemia has not been extensively studied. We did not observe statistically significant differences in mortality between patients with AF and patients with SR who were operated on due to acute ischemia. Cardiovascular mortality was slightly higher in patients with AF compared to those with SR ($28.4\%$ vs. $18.8\%$); however, this difference failed to reach statistical significance. A similar trend was observed in the study of Ralevic et al. [25], who demonstrated that lower limb amputation, cardiovascular death, and adverse cardiovascular events were more common in patients with AF during follow-up compared with patients without AF. Lorentzen et al. [31] showed that AF increased the risk of mortality, decreased patients’ quality of life, and increased the number of hospitalizations. Moreover, according to Vohra et al. [32], in patients with AF-related peripheral embolic complications, the mortality risk was higher compared to individuals with embolism associated with myocardial infarction. Data from the Reduction of Atherothrombosis for Continued Health Registry [33] demonstrated that AF was an independent predictor of long-term CV events in patients with symptomatic peripheral arterial disease (PAD) [34]. We can only suspect that the lack of statistically significant differences in mortality between AF and SR patients in our study is associated with the relatively small number of AF patients, as well as with the introduction of appropriate treatment, because, as mentioned above, many patients were not receiving the best medical treatment before hospitalization.
In this study, we also observed that the CHA2DS2-VASc scale was not a predictor of cardiovascular mortality in patients with AF or SR. In general, the CHA2DS2-VASc score can be used to assess the risk of stroke in patients with atrial fibrillation. However, published results show considerable variability in the relation between the CHA2DS2-VASc score and mortality in AF patients, especially regarding patient history, drug treatment, and clinical status [35,36,37]. In an observational retrospective cohort study (CONSORT compliant), the predictive value of the CHA2DS2-VASc score was confirmed in relation to overall all-cause mortality [36]. Patients with higher risk scores had a survival rate of $79.1\%$, while medium-risk and low-risk patients had survival rates of $95.6\%$ and $100\%$, respectively. According to Potpara et al. [38], the CHA2DS2-VASc score is a reliable predictor of 30-day unfavorable outcomes in patients with acute ischemic stroke. Its sensitivity and specificity for unfavorable short-term functional outcomes are greater in comparison to other scores, including the CHADS2 and HAS-BLED ($93.5\%$ vs. $92.4\%$ vs. $71.7\%$ and $77.0\%$ vs. $61.5\%$ vs. $69.6\%$, respectively; all $p \leq 0.05$). Although the CHA2DS2-VASc score differed significantly between the AF and SR groups in our study, it did not correlate with cardiovascular mortality, probably because many more patients with AF received appropriate treatment before hospitalization, which might have influenced their outcomes. The presence of AF may be a primary driver of the administration of therapy for stroke prevention, which decreases the mortality rate. Jackson et al. [35] confirmed that systemic oral anticoagulant treatment (OAC) was associated with lower rates of all-cause mortality, cardiovascular death, and first stroke/TIA among patients with a CHA2DS2-VASc score ≥ 2. Moreover, in most studies, this scale was used to assess all-cause death rather than cardiovascular mortality.
A univariable analysis of factors modulating the risk of cardiovascular mortality in our population of patients with acute peripheral arterial ischemia and AF demonstrated that PAD and hypercholesterolemia (obesity paradox) reduced the risk of cardiovascular mortality. According to numerous studies, PAD and AF share similar epidemiologic patterns and risk factors, and their presence is related to increased morbidity and mortality [39]. A sub-analysis of data from the Reduction of Atherothrombosis for Continued Health Registry demonstrated that the combined presence of AF and PAD significantly increased the rates of cardiovascular (CV) death [33]. Additionally, Lin et al. [40] found that the coexistence of AF and PAD considerably increased the risk of all major adverse outcomes and was associated with at least a two-fold higher risk of CV death than in patients with AF or PAD only. The reduction in cardiovascular mortality related to the presence of PAD in AF patients in this study may be associated with the fact that PAD patients had probably previously been intensively treated with antihypertensive, lipid-lowering, and antiplatelet drugs. Indeed, $94.7\%$ of patients in this group used antiplatelet drugs. It is also possible that patients with an earlier diagnosis of PAD introduced dietary changes and ceased smoking, thus decreasing their cardiovascular risk. The importance of hypercholesterolemia as a factor reducing cardiovascular mortality in the AF group was also confirmed in the multivariable analysis, in which it decreased the risk of death by $87\%$. Again, this phenomenon could be associated with the fact that patients with a history of hyperlipidemia were treated with statins and other lipid-lowering drugs, which reduced their cardiovascular mortality. Additionally, Clua-Espuny et al. [41] found that mortality among AF patients was significantly lower in those treated with statins. The obesity paradox in atrial fibrillation patients, particularly for all-cause and cardiovascular death outcomes, has been extensively described [42,43].
In patients with SR, the univariable analysis revealed a correlation between cardiovascular mortality and age: the risk of cardiovascular mortality increased 1.6-fold with every ten years of age. Similarly, in a retrospective review of patients with acute limb ischemia, the risk of mortality increased with age and renal failure, but also with female gender, cancer, in situ thrombosis or embolic etiology, cardiac events, and hemorrhagic events [44]. Eliason et al. [45] indicated that in patients with acute lower extremity ischemia, an age of less than 63 years was an independent variable associated with a decreased risk of in-hospital mortality. Finally, the analysis of the National Audit of Thrombolysis for Acute Leg Ischemia (NATALI) database confirmed that the mortality of patients who had undergone intra-arterial thrombolysis to treat acute leg ischemia was higher in women and older patients, and in patients with native vessel occlusion, emboli, or a history of ischemic heart disease [46]. The relationship between higher mortality and advanced age may be due to the fact that the prevalence of comorbidities increases with advancing age in many populations. The impact of age was also confirmed in our multivariable analysis.
We also observed that in patients with SR and an eGFR < 60 mL/min/1.73 m2, the risk of cardiovascular death was 2.48-fold higher than in those with a higher eGFR. In turn, Kuoppala et al. [47] demonstrated that renal insufficiency was among the independent factors associated with in-hospital mortality after thrombolysis. Moreover, they indicated that, among other factors, renal insufficiency and an age ≥80 years were associated with mortality during follow-up. They suggested that the administration of a contrast agent during angiography may be partly responsible for this negative relationship in this subgroup of patients. Renal impairment is also more frequent and more severe in patients with CAD and vascular complications, which can also explain the association between lower GFR and cardiovascular mortality. Maithel et al. [48] confirmed the relationship between renal insufficiency and poorer outcomes in patients after open vascular surgery. Additionally, in patients with AF, renal dysfunction proved to be a strong, independent predictor of left atrial appendage thrombus formation [49]. This study has demonstrated that acute peripheral arterial ischemia continues to be associated with high mortality despite advances in endovascular therapies and improved critical care.
The choice of treatment method in patients with acute limb ischemia is difficult. There are no strict criteria defining the risk of reperfusion syndrome after revascularization. Studies on the preoperative inflammatory biomarkers neutrophil-to-lymphocyte ratio and platelet-to-lymphocyte ratio are encouraging: increased preoperative values of these markers may indicate a poor outcome and the need for primary amputation [50].
A significant limitation of the study is the small size of the study group. Patients were treated and followed in a single high-volume tertiary care center, which might affect the external validity of the results. Other limitations include the lack of data on non-anticoagulant/antiplatelet and other therapy before hospitalization, the lack of detailed information on the post-surgery period, and the lack of data on ischemic events during the follow-up period.
## 5. Conclusions
The mortality of patients operated on due to acute ischemia did not differ significantly between the group of patients with AF and those with SR. Moreover, the CHA2DS2-VASc scale proved not to be a good predictor of cardiovascular mortality in patients with AF or SR. The presence of hyperlipidemia reduced the risk of cardiovascular mortality in patients with AF, whereas in patients with SR, an age of ≥75 years was a factor predisposing to such mortality.
# Effect of Different Intensities of Aerobic Exercise Combined with Resistance Exercise on Body Fat, Lipid Profiles, and Adipokines in Middle-Aged Women with Obesity
## Abstract
We aimed to investigate the effect of different intensities of aerobic exercise (VO2max: $50\%$ vs. $80\%$) on body weight, body fat percentage, lipid profiles, and adipokines in obese middle-aged women after 8 weeks of combined aerobic and resistance exercise. The participants included 16 women aged >40 years with a body fat percentage of ≥$30\%$; they were randomly assigned to resistance exercise combined with either moderate-intensity aerobic exercise (RME, $50\%$ VO2max, 200 kcal; $$n = 8$$) or vigorous-intensity aerobic exercise (RVE, $80\%$ VO2max, 200 kcal; $$n = 8$$). After 8 weeks of exercise, we observed that body weight and body fat percentage decreased significantly in both groups ($p \leq 0.01$). The total cholesterol ($p \leq 0.01$) and LDL ($p \leq 0.05$) levels decreased significantly in the RME group, while triglyceride levels decreased significantly in both groups ($p \leq 0.01$). The HDL levels tended to increase only slightly in both groups. The adiponectin levels decreased significantly in the RVE group ($p \leq 0.05$), and the leptin levels decreased significantly in both groups ($p \leq 0.05$). To prevent and treat obesity in middle-aged women, combined exercise (aerobic and resistance) is deemed effective; additionally, moderate-intensity aerobic exercise during combined exercise could be more effective than vigorous-intensity exercise.
## 1. Introduction
According to the World Health Organization (WHO), obesity is defined as “abnormal or excessive fat accumulation that presents a risk to health.” Obesity became a global health problem by the end of the 20th century [1]. Obesity refers to the excessive accumulation of fat in the human body due to excessive caloric intake, irregular lifestyle, and lack of physical activity and is a known cause of various adult diseases, such as diabetes, high blood pressure, hyperlipidemia, and cardiovascular disease [2]. Obesity leads to changes in blood lipid profiles, and there is a direct relationship between chronically elevated cholesterol levels (dyslipidemia) and an increased risk of cardiovascular diseases [3,4].
A major breakthrough in the perception of adipose tissue as an endocrine organ was the discovery of adipokines, biologically active substances; leptin was the first to be discovered [5,6]. Adiponectin, an adipokine, enhances insulin sensitivity and exerts anti-inflammatory and antifibrotic activities [7]; however, studies show that adiponectin is reduced in patients with obesity and coronary artery disease, suggesting its crucial role in obesity-associated cardiovascular diseases [7,8]. Leptin, an important hormone in the prevention and treatment of obesity, regulates many physiological processes, such as appetite suppression, energy consumption, and non-shivering thermogenesis [9,10]. Circulating leptin levels are highly proportional to the amount of adipose tissue [11]. Women with a large relative fat mass tend to exhibit two-fold higher circulating leptin levels than men of similar body weight [12]; therefore, the risk of chronic diseases associated with high circulating leptin levels is higher in women than in men.
The WHO recommends exercise, which is the most effective modality for the prevention and treatment of obesity [1]; additionally, regular exercise reduces body fat, improves lipid profiles, and changes adipokine levels [13]. In middle-aged women, reduced physical activity may result in lower estrogen secretion and increased fat mass and central adiposity; these conditions are linked to the development of morbidities such as type 2 diabetes, hypertension, atherosclerosis, dyslipidemia, and metabolic syndrome [14]. Physical inactivity and lower hormone secretion in middle-aged women can lead to decreased lean mass, muscle strength, and bone mass; in turn, this may cause musculoskeletal diseases such as sarcopenia, impaired balance and movement, and more frequent falls, consequently decreasing quality of life [15]. Exercise programs for middle-aged women should therefore include resistance exercises that increase lean mass and muscle strength, and combined exercise (aerobic plus resistance) could alter body fat, lipid profiles, and adipokine levels [14]. In previous studies, combined exercise decreased body weight and body fat [16], improved lipid profiles [14,17], and induced positive changes in adiponectin and leptin levels [14,18].
The most effective exercise prescriptions for the prevention and treatment of obesity should consider factors such as intensity, volume, frequency, and type of exercise [19]. However, the effects of different aerobic exercise intensities at the same volume and frequency have not yet been elucidated. Therefore, the purpose of this study was to investigate the effects of combined resistance and aerobic exercise of different intensities ($50\%$ VO2max vs. $80\%$ VO2max) on body fat, lipid profiles, and adipokines in obese middle-aged women after 8 weeks of exercise. All participants performed the same amount of exercise (a daily energy expenditure of 400 kcal, per the American College of Sports Medicine’s [ACSM’s] recommendation: 200 kcal of aerobic exercise and 200 kcal of resistance exercise) [20].
## 2.1. Subjects
This study was approved by the Research Ethics Committee of Dongguk University (DGU IRB 20200033-1). We included 16 middle-aged women (age > 40 years) with obesity (>$30\%$ body fat) and without any previous diagnosis of metabolic disease or other health problems. The participants did not perform any regular physical activity or exercise. The participants were informed of the procedures and signed a document of informed consent before participating. They were instructed to maintain their typical diet pattern throughout the study, and compliance with this instruction was assessed using food questionnaires (1-day recall). The typical diet was based on the daily recommended calorie intake of 2000 kcal for Korean women and comprised foods commonly consumed by Koreans; expert feedback on the diet was provided. The participants were randomly assigned to the resistance and moderate aerobic exercise (RME, $50\%$ VO2max + total body resistance exercise [TRX], $$n = 8$$) and resistance and vigorous aerobic exercise (RVE, $80\%$ VO2max + TRX, $$n = 8$$) groups according to the intensity of exercise. The physical characteristics of the participants are presented in Table 1.
## 2.2. Body Fat Measurement
Body fat was measured by a certified expert at three sites (triceps, front of thigh, and iliac crest) using the skinfold thickness method, and all procedures were conducted according to the International Society for the Advancement of Kinanthropometry. After measurement, body fat was estimated using the formulas of Siri [21] and Jackson et al. [22].
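The text cites the Siri and Jackson et al. equations without reproducing them; the sketch below assumes the commonly published three-site Jackson–Pollock equation for women (triceps, suprailiac, thigh) together with the Siri density-to-fat conversion:

```python
# Sketch of the two-step skinfold estimate. The 3-site Jackson-Pollock
# equation for women (triceps, suprailiac, thigh) shown here is the commonly
# published variant and is assumed, not confirmed, to be the one used.

def body_density_jp3_women(triceps_mm, suprailiac_mm, thigh_mm, age):
    s = triceps_mm + suprailiac_mm + thigh_mm  # sum of skinfolds (mm)
    return 1.0994921 - 0.0009929 * s + 0.0000023 * s ** 2 - 0.0001392 * age

def percent_fat_siri(density):
    """Siri equation: convert body density (g/cm3) to % body fat."""
    return 495.0 / density - 450.0

d = body_density_jp3_women(triceps_mm=28, suprailiac_mm=24, thigh_mm=38, age=48)
print(round(percent_fat_siri(d), 1))  # ~34.3% body fat for this example
```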
## 2.3. Blood Samples and Analysis
Blood samples were obtained from the antecubital vein after a 12-h fast (both before and after 8 weeks of exercise) and collected into vacutainer tubes with EDTA under the same conditions and time periods. The collected blood was centrifuged at 3000 rpm for 10 min and stored in a deep freezer at −70 °C. The total cholesterol levels and the respective fractions were analyzed using enzymatic colorimetric assays (Modular Analytics Co., Manchester, UK).
## 2.4. VO2max Measurement
To estimate VO2max, a 1-mile (1609 m) walk was performed by the participants wearing a heart rate monitor (Polar Electro, Kempele, Finland); the rating of perceived exertion was checked every minute to adjust the exercise duration and speed during the test. After the test, VO2max per body weight was estimated from the exercise time and heart rate using the following formula [23]: VO2max (mL/kg/min) = 132.853 − (0.1692 × body mass in kg) − (0.3877 × age) + (6.315 × sex) − (3.2649 × time in min) − (0.1565 × HR) (1), where sex: man = 1, woman = 0; and HR: heart rate immediately after the end of the walk.
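A direct transcription of Equation (1) into Python, with illustrative inputs:

```python
# Direct transcription of Equation (1) above, with illustrative inputs.

def vo2max_walk_test(mass_kg, age, is_male, time_min, hr_end):
    """Estimated VO2max (mL/kg/min) from the 1-mile walk test."""
    return (132.853 - 0.1692 * mass_kg - 0.3877 * age
            + 6.315 * (1 if is_male else 0)
            - 3.2649 * time_min - 0.1565 * hr_end)

# e.g., a 48-year-old woman, 66 kg, 16.5-min walk, finishing HR of 125 bpm
print(round(vo2max_walk_test(66, 48, False, 16.5, 125), 1))  # ~29.6 mL/kg/min
```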
## 2.5. Exercise Program
The exercise program used in this study is shown in Table 2 and was performed five times a week by each group. Energy expenditure was measured using the Polar heart rate monitor, counting the 400 kcal from the moment the target intensity was reached (a daily energy expenditure of 400 kcal, per the ACSM’s recommendation) [20]. The exercise intensity and rating of perceived exertion were continuously supervised, and the exercise speed was adjusted until the end of the exercise. After a 15-min warm-up under expert supervision, the participants ran on a treadmill at $50\%$ VO2max in the RME group and $80\%$ VO2max in the RVE group until reaching a 200-kcal expenditure. The aerobic exercise was directly supervised by an expert so that the exercise intensity remained fixed for each group. The average session times for RME and RVE were 45–48 min and 30–33 min, respectively. As the aerobic component was based on caloric expenditure measured through the heart rate response, the session duration was individualized to each participant. Thereafter, total body resistance exercise (TRX) was performed. The TRX exercises comprised the use of resistance bands to perform various upper body, lower body, and abdominal exercises (Table 2). Before starting the program, a detailed explanation of the movements was given to the participants, and an expert supervised all exercises. TRX was performed at 60–$70\%$ of the HRmax in both groups, for a further 200-kcal expenditure [20]. No modifications were made to the exercises.
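As a plausibility check on these session durations, one can approximate energy expenditure from oxygen uptake using the common approximation of roughly 5 kcal per litre of O2 consumed; the sketch below is an assumption-laden back-of-envelope calculation, not the study’s method:

```python
# Back-of-envelope check of the reported session durations, assuming the
# common approximation of ~5 kcal per litre of O2 consumed. All inputs are
# illustrative; this is not the study's calculation.

def minutes_for_kcal(target_kcal, vo2max_ml_kg_min, intensity, mass_kg,
                     kcal_per_l_o2=5.0):
    vo2_l_per_min = vo2max_ml_kg_min * intensity * mass_kg / 1000.0  # L O2/min
    return target_kcal / (vo2_l_per_min * kcal_per_l_o2)

for label, intensity in [("RME (50% VO2max)", 0.50), ("RVE (80% VO2max)", 0.80)]:
    print(label, round(minutes_for_kcal(200, 30.0, intensity, 65.0)), "min")
# ~41 min at 50% and ~26 min at 80%, close to the reported 45-48 / 30-33 min
```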
## 2.6. Statistical Analysis
All data analyses in this study were conducted using IBM SPSS Statistics ver. 22.0 (IBM, Armonk, NY, USA). The means and standard errors of all measurements were calculated. The sample size for this study was calculated using the G*Power program (University of Düsseldorf, Düsseldorf, Germany). We set the effect size at 0.2, the power at 0.9, the number of groups at 2, and the number of measurements at 2 for a two-way analysis of variance (ANOVA). As the study involved a human intervention, conducted during the COVID-19 pandemic, it was particularly difficult to recruit participants and maintain the study, hence the small sample size and reduced statistical power. A two-way ANOVA was used to determine the effects of the interaction between group (RME vs. RVE) and time (pre- vs. post-) on the measured variables. When there were significant interaction effects, post-hoc comparisons were performed using the LSD test, and the level of significance was set at α = 0.05.
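The study ran this analysis in SPSS; an assumed Python equivalent using the pingouin package, with long-format data and hypothetical column names, might look like:

```python
# Assumed Python equivalent of the SPSS group x time analysis, using the
# pingouin package with long-format data and hypothetical column names.
import pandas as pd
import pingouin as pg

df = pd.read_csv("exercise_long.csv")  # columns: subject, group, time, weight

aov = pg.mixed_anova(data=df, dv="weight", within="time",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc"]])  # the "Interaction" row tests group x time
```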
## 3.1. Body Weight and Body Fat
After eight weeks of exercise, there were no significant interaction effects between group and time on body weight or body fat percentage. In the main effect tests, body weight ($p \leq 0.01$; RME: 64.58 ± 13.69 vs. 61.53 ± 14.18 kg; RVE: 66.95 ± 10.87 vs. 63.64 ± 9.51 kg) and body fat percentage ($p \leq 0.01$; RME: 34.98 ± 3.39 vs. 28.73 ± 4.75%; RVE: 35.23 ± 4.25 vs. 28.15 ± 4.86%) decreased significantly after exercise in both groups, but there was no difference between the groups (Figure 1 and Figure 2).
## 3.2. Lipid Profiles
After eight weeks of exercise, there were no significant interaction effects between group and time for total cholesterol (TC), triglycerides (TG), low-density lipoprotein (LDL), or high-density lipoprotein (HDL). In the main effect tests, TC ($p \leq 0.01$) and LDL ($p \leq 0.05$) decreased significantly after exercise in the RME group, and TG ($p \leq 0.01$) decreased significantly after exercise in both groups. Although HDL demonstrated an increasing trend, the change was not statistically significant. There were no differences between the groups for any of the variables (Table 3, Figure 3, Figure 4, Figure 5 and Figure 6).
## 3.3. Adipokines
Adiponectin demonstrated a significant interaction effect between group and time ($p \leq 0.05$); however, there was no interaction effect on the leptin concentration in the blood. After 8 weeks of exercise, adiponectin levels decreased significantly in the RVE group ($p \leq 0.05$), and leptin levels decreased significantly in both groups ($p \leq 0.05$) (Table 4, Figure 7 and Figure 8). The fasting glucose level exhibited a decreasing trend after exercise but did not reach statistical significance.
## 4. Discussion
This study compared the effects of two aerobic exercise intensities within a combined exercise program ($50\%$ VO2max + TRX vs. $80\%$ VO2max + TRX) on body fat, lipid profiles, and adipokines in middle-aged women. The findings indicate that, when combined with resistance exercise, moderate-intensity aerobic exercise had positive effects on more variables (TC, TG, LDL, adiponectin, and leptin) than vigorous-intensity aerobic exercise.
Changes in body weight and body fat are important factors related to the treatment of health problems and diseases [20]. Regular exercise has a positive effect on changes in body weight and body fat [13]. The results of this study are consistent with those of previous studies, which showed that a combination of aerobic and resistance exercise—which increases lipid oxidation, fat-free mass, and resting metabolic rate—significantly reduces weight and body fat [16,24]. Thus, combined exercise may be a more efficient exercise program for decreasing body weight and body fat in obese middle-aged women, regardless of the exercise intensity. However, there was a discrepancy between the expected calorie consumption by the exercise program and the amount of body fat loss. Our findings suggest that adding healthy lifestyle interventions, such as exercise, may motivate participants to choose healthier dietary options, which may further impact calorie expenditure and subsequent weight loss.
Aerobic exercises—such as walking, jogging, and running—are traditionally performed to alter blood lipid profiles, with various results reported according to exercise intensity [25,26]. O’Donovan et al. [25] controlled the exercise volume to directly assess the impact of aerobic exercise intensity. In their study, participants in the moderate- ($60\%$ VO2max) and high-intensity ($80\%$ VO2max) exercise groups completed three 400 kcal sessions weekly for 24 weeks, and TC and LDL levels significantly decreased only in the high-intensity group ($p \leq 0.05$). Kraus et al. [26] reported that LDL and TG levels significantly decreased, while HDL levels significantly increased, following high-intensity aerobic exercise ($p \leq 0.05$); however, other studies have reported that moderate-intensity aerobic exercise also significantly improves blood lipid profiles [27,28]. These results suggest that aerobic exercise reduces blood lipids regardless of exercise intensity. Although data on combined aerobic and resistance exercise are limited, several researchers have suggested that some combined programs can effectively lower blood lipid profiles and increase HDL. Previous studies have reported that TC [29], TG [17,30], and LDL [29,30] levels decreased significantly, while HDL [19,31] levels increased significantly, after combined exercise (moderate aerobic exercise plus TRX). In this study, blood TC levels decreased significantly in the RME group ($p \leq 0.01$), while TG levels decreased significantly in both groups ($p \leq 0.01$). LDL levels decreased significantly in the RME group after exercise ($p \leq 0.05$), and although the change did not reach statistical significance, HDL levels tended to increase slightly in both groups. These results suggest that moderate-intensity exercise may activate fat metabolism more effectively, with additional physiological benefits, such as increased muscle strength and lean body mass, attributable to the TRX component of the combined exercise regimen [32].
Although most adipokines secreted by adipocytes have a positive correlation with obesity, adiponectin is negatively correlated with obesity: the levels of adiponectin in the blood decrease with increasing obesity [33]. A decrease in the adiponectin levels in the blood was reported to play an important role in the development of atherosclerosis as it reduced the inhibitory effect on atheromatous production [34]. Additionally, a decrease in adiponectin levels has been reported in patients with obesity [35], type 2 diabetes [36], and cardiovascular disease [34,37].
Adiponectin is closely related to various chronic diseases and is known to be significantly affected by exercise and by reductions in body weight and body fat. Lim et al. [38] reported that adiponectin levels increased with a decrease in body weight and body fat percentage after combined exercise. However, Hara et al. [39] reported no change in adiponectin levels despite a decrease in body fat after combined exercise, while Paulo et al. [24] and Langleite et al. [40] reported a decrease in adiponectin levels. Although both groups in our study exhibited significantly reduced body weight and body fat after 8 weeks of exercise, adiponectin levels decreased significantly in the RVE group, with no change in the RME group. One possible explanation for the decrease or lack of change is that adiponectin levels are inversely correlated with adiponectin receptor expression in muscle and adipose tissue after exercise [40,41]. Another possibility is that the catecholamines secreted during exercise and the decreased fasting glucose level after exercise suppressed the gene expression of adiponectin [42,43]. Additionally, the degree of weight loss differed from that reported in previous studies. In this study, body weight and body fat decreased significantly; however, neither aerobic exercise intensity within the combined exercise program had a positive effect on blood adiponectin. Therefore, the effect of aerobic exercise intensity combined with resistance training on adiponectin levels remains unclear.
Blood leptin levels increase in proportion to the amount of body fat [11], and it has been reported that blood leptin levels are approximately twice as high in women as in men with the same body fat percentage [12]. In addition, the risk of various chronic diseases is higher in women. Previous studies reported that leptin levels decreased with a decrease in body weight and body fat after aerobic exercise [44,45]. However, other studies have reported no change in leptin levels despite a decrease in body weight and body fat after resistance exercise [46,47] and combined exercise [38]. In the present study, there was a significant reduction in weight and body fat percentage along with leptin levels in both groups after 8 weeks of exercise, consistent with several previous studies. These results indicate that a decrease in body fat due to exercise is accompanied by a decrease in blood leptin levels; this phenomenon is thought to reflect changes in stored fat mass through an improved balance of energy and fat metabolism [48]. Additionally, regarding catecholamine changes during exercise, increased norepinephrine activity lowers blood leptin levels by improving fatty acid utilization and reducing leptin resistance [49]. In the present study, these effects were observed, but there were no differences between the two exercise programs. Additional studies are needed to examine how differences in exercise intensity affect other obesity-related factors, such as gut hormones and other adipokines. Although we attempted to manage them, we could not tightly control all individual activities and diets; the limitations of our study therefore include the small sample size and the lack of control over calorie intake and diet composition.
## 5. Conclusions
The purpose of this study was to investigate the effects of different aerobic exercise intensities for combined exercise ($50\%$ VO2max vs. $80\%$ VO2max with TRX) on body weight, body fat, lipid profiles, and adipokines. Weight and body fat decreased significantly in both groups ($p \leq 0.01$), blood TC levels decreased significantly in the RME group ($p \leq 0.01$), and TG levels decreased significantly in both groups ($p \leq 0.01$) after exercise. LDL levels decreased significantly in the RME group ($p \leq 0.05$), while HDL levels tended to increase slightly in both groups after exercise. Adiponectin levels decreased significantly in the RVE group ($p \leq 0.05$), and leptin levels decreased significantly in both groups ($p \leq 0.05$) after exercise. An interesting finding of this study is that the effect of combined exercise was similar to that of aerobic exercise alone for improvements in body fat, lipid profiles, and adipokine levels, although additional physiological effects can be expected from the resistance exercise component of the combined exercise.
In conclusion, combined aerobic plus resistance exercise could be effective in preventing and treating obesity in middle-aged women. Additionally, moderate-intensity aerobic exercise within a combined program could be more effective than vigorous-intensity aerobic exercise.
# Walk Score and Neighborhood Walkability: A Case Study of Daegu, South Korea
## Abstract
Walking is a popular physical activity that helps prevent obesity and cardiovascular diseases. The Walk Score, which measures neighborhood walkability, considers access to nine amenities using a geographic information system but does not address pedestrian perception. This study aims to (1) examine the correlation between access to each amenity, an individual component of the Walk Score, and perceived neighborhood walkability and (2) investigate the correlation with perceived neighborhood walkability after adding variables of pedestrian perception to the existing Walk Score components. This study conducted a survey of 371 respondents in Daegu, South Korea, between 12 October and 8 November 2022. A multiple regression model was used to examine the correlations. The results showed no association between perceived neighborhood walkability and the individual components of the Walk Score. Among the environmental perception variables, the fewer the hills or stairs, the more alternative walking routes, the better the separation between roads and pedestrians, and the richer the green space, the more people perceived their neighborhood as walkable. This study found that the perception of the built environment had a more substantial influence on perceived neighborhood walkability than accessibility to amenities, indicating that the Walk Score should incorporate pedestrian perception alongside its quantitative measurements.
## 1. Introduction
Walking is an inexpensive and popular physical activity that helps prevent obesity and cardiovascular disease [1,2,3,4]. The advantages of walking have increased interest in a walkable environment [5,6]. The built environment plays a crucial role in determining the quality of walking, and a well-designed built environment positively affects walkability [7,8,9]. Therefore, it is essential to understand how the characteristics of the built environment are related to walking.
Various indices have been developed to measure the built environment with regard to walkability [1,10,11,12]. The Walk Score—a user-friendly and convenient index—is widely used in many studies [11,13]. The index enables comparison of environmental walkability among countries and is used globally [14]. For example, comparing Walk Scores between cities in Western countries and Seoul in South Korea enables intriguing research, as such comparisons can draw policy implications from differences in environmental walkability between countries [15].
Kim et al. measured the Walk Score in Seoul and found a significant correlation with pedestrian satisfaction [16]. However, that study points out that although the Walk Score is efficient as a walkability index, it is limited in its application to the Asian context. The built environment in Korea has dense and mixed land use, and amenities are oversupplied in most cities with these characteristics [17,18,19,20]. Consequently, people have easy access to amenities by walking. In this context, Korean cities can receive a high Walk Score, but explaining Korea’s walkability only through accessibility to the destinations that constitute the Walk Score can produce distorted results [21]. Therefore, it is necessary to consider the additional variables required to develop the Walk Score into a walkability index suitable for the Korean context.
The Walk Score is a quantitative indicator calculated from nine types of access to amenities and two measures of pedestrian friendliness [22]. However, it has the shortcoming of not including pedestrians’ perception of the built environment [7]. Bereitschaft argued that evaluating walkability based only on the Walk Score has low accuracy, since it does not consider people’s perception of the neighborhood environment, even though it is useful for understanding overall neighborhood walkability [11]. Tuckel and Milczarski found that the Walk Score was associated with walking for transport but not with walking for leisure purposes; they did not examine the association between the Walk Score and perceived neighborhood walkability, so it remains necessary to examine the relationship between them [6]. Perception of the built environment in which people live determines their walking attitude and willingness to walk, and from this perspective, it is essential to understand citizens’ perception of the built environment as it affects walking [1,23,24,25].
Therefore, this study aims to (1) determine how strongly the nine amenity elements constituting the Walk Score are related to residents’ perceived neighborhood walkability and (2) add variables measuring people’s perception of built environmental walkability to the Walk Score and examine their correlation with perceived neighborhood walkability.
## 2.1. Concept of the Walk Score
The Walk Score is a free public web-based tool for measuring local walkability and is available in the USA, Canada, Australia, and New Zealand. The Walk Score is determined by accessibility to nine amenities (grocery, restaurants, shopping, coffee shops, banks, parks, schools, books, and entertainment) for each address [22]. An amenity within 0.25 miles obtains a full score; beyond that, a distance decay function reduces the score as the distance increases, with amenities scored up to 1.5 miles away [26].
Based on this, scores are generated according to the network distance from the address to the destination and the facility’s weight. Each amenity has a different weight, and the number of facilities considered also differs (Table 1). The weights and the number of facilities were determined based on previous studies of walkability. The score is then obtained by multiplying each amenity’s weight by 6.67 and aggregating the results into a normalized score from 0 to 100. Meanwhile, penalties based on average block length and intersection density, which represent pedestrian friendliness, deduct 1–$10\%$ from the score [22]. The Walk Score is calculated in this manner, and the scores are divided into five tiers: 0–24 points are “car-dependent”, 25–49 points are “somewhat car-dependent”, 50–69 points are “somewhat walkable”, 70–89 points are “very walkable”, and 90–100 points are “walker’s paradise” [26].
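To make the scoring procedure concrete, the sketch below reproduces the logic described above under simplifying assumptions: a linear distance decay standing in for the published (smoother) decay curve, one nearest facility per amenity category rather than the several facilities the official algorithm counts, and the category weights reported in the text (grocery and restaurants 3; shopping and coffee shops 2; the rest 1). It is an illustration, not the proprietary algorithm.

```python
# Category weights as reported in the text (Table 1)
WEIGHTS = {"grocery": 3, "restaurants": 3, "shopping": 2, "coffee": 2,
           "banks": 1, "parks": 1, "schools": 1, "books": 1,
           "entertainment": 1}  # weights sum to 15; 15 x 6.67 ~ 100

def decay(miles: float) -> float:
    """Illustrative distance decay: full credit within 0.25 mi,
    falling linearly to zero at 1.5 mi."""
    if miles <= 0.25:
        return 1.0
    if miles >= 1.5:
        return 0.0
    return (1.5 - miles) / (1.5 - 0.25)

def walk_score(nearest_miles: dict, penalty_pct: float = 0.0) -> float:
    """Score an address from the network distance (miles) to the nearest
    amenity of each category; `penalty_pct` is the 1-10% block-length /
    intersection-density deduction."""
    raw = sum(WEIGHTS[a] * decay(d) * 6.67 for a, d in nearest_miles.items())
    return max(0.0, min(100.0, raw * (1 - penalty_pct / 100)))

# Every amenity within 0.25 mi and no penalty -> 100, "walker's paradise"
print(round(walk_score({a: 0.2 for a in WEIGHTS})))
```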
## 2.2. Built Environmental Factors and Walking
Several studies have found a significant association between the built environment and walking [24,27,28,29,30,31,32,33,34,35,36,37,38,39]. This study classified built environmental factors that affect walking into four categories: convenience, connectivity, safety, and comfort. First, on the convenience of the built environment, Kim et al. and Lee et al. investigated the factors affecting pedestrian volume and satisfaction in Seoul and found that wider sidewalks increase pedestrian volume and satisfaction, while steep roads negatively affect them [24,33]. Herrmann-Lunecke et al. examined how pedestrians’ perception of the built environment affects the walking experience in Santiago, Chile [30]. This study showed that pedestrians are happier when the sidewalk is wider, while narrow sidewalks invoke anger and fear. Similarly, Zumelzu Scheel et al. investigated pedestrians’ perceptions of the built environment in southern Chile and confirmed that wider sidewalks in good condition promote walking [39]. These studies show that convenience relates to the ease and efficiency of walking and to making people willing to walk. Therefore, factors that measure convenience may include the presence of various facilities, sidewalk width, sidewalk condition, hills and stairs, and pedestrian shelters.
Second, on connectivity, Adkins et al. analyzed how urban design characteristics affect the perception of walking environment attractiveness [27]. This study showed that walking environment attractiveness increases with better pedestrian connectivity. Liao et al. examined factors influencing walking time among people in Taiwan [34]. The study showed that if roads are well connected, people are more likely to walk more than 150 min weekly. Furthermore, Ferrari et al. examined the relationship between the perceived built environment of neighborhoods and walking and cycling in eight Latin American countries [29]. The study found that people were more likely to choose walking for transport when there were more alternative routes in the neighborhood. Meanwhile, Nag et al. identified that well-connected pedestrian roads without obstacles promote pedestrian satisfaction [35]. Connectivity evaluates whether walking is uninterrupted and whether the road network is well connected. Specifically, more alternative routes, better sidewalk connections, and fewer pedestrian obstacles can serve as connectivity factors and will further increase pedestrian walking.
Third, Yu et al., Ariffin and Zahari, and Oyeyemi et al. studied the safety factor of the built environment [28,36,37]. Yu et al. investigated how the elements of perceived neighborhood walkability are related to well-being and loneliness among older adults in Hong Kong [37]. The study found that traffic safety is significantly associated with well-being, which decreases when pedestrians face difficulties with walking due to heavy traffic. Meanwhile, Ariffin and Zahari examined the built environmental factors that can promote walking behavior in Malaysia [28]. This study showed that reducing the risk of crime motivates people to walk, emphasizing the importance of safety awareness to encourage walking. Oyeyemi et al. investigated the relationship between older adults’ sedentary time and attributes of the neighborhood environment in Nigeria [36]. The results showed that a lack of safety from crime is likely to increase older adults’ sedentary time. The safety category measures whether pedestrians can walk safe from traffic collisions and crime. Accordingly, variables such as street segregation, crosswalks and traffic lights, and traffic volume can be used as safety factors related to traffic collisions, and security facilities (CCTV, streetlights, etc.) may be employed as safety factors related to crime.
Fourth, Zhang et al. investigated the relationship between older adults’ frequency and duration of walking trips and the built environment in the Zhongshan metropolitan area, China [38]. The study found that older adults were more encouraged to walk when the percentage of green space land use was higher. Lee et al. examined the correlation between neighborhood environment variables and neighborhood satisfaction [31]. The results showed that perceived aesthetics, such as trees in the neighborhood, no trash, attractive architecture, and natural scenery, positively correlated with neighborhood satisfaction. In another study, Lee et al. examined the factors affecting pedestrian satisfaction according to land use and road type [32]. The results showed that green space positively influenced pedestrian satisfaction, and clean streets in the commercial district also increased satisfaction. The comfort factor reflects how pleasant the built environment is to walk in, measuring pedestrians’ perception of green space, noise levels, and so on. More green spaces and natural scenery, cleaner streets, less odor and smoke, and lower noise levels can be comfort factors that promote pedestrian walking.
As we reviewed above, the built environmental factors that affect walking were summarized into four categories: convenience, connectivity, safety, and comfort. According to the previous studies examined, the variables of walkable built environments that can be investigated in the survey questionnaire of this study were selected as follows: (1) convenience: various facilities, sidewalk width, sidewalk conditions, hills and stairs, and pedestrian shelters; (2) connectivity: multiple alternative routes, sidewalk connection, and pedestrian obstacles; (3) safety: pedestrian segregation, crosswalk and traffic lights, traffic volume, and security facilities; (4) comfort: green spaces, natural scenery, street cleanness, odor and smoke, and noise level. We will use these 17 items from four categories as the independent variables of built environmental perception in this study.
## 3.1. Research Area and Data Collection
This study covered the city of Daegu, located in the southeastern part of South Korea (Figure 1). Daegu covers 883.7 km2 and had a population of 2,385,412 as of 2021 [40]. This study used survey data from a larger project (the Healthy Walking Project), and the survey was conducted from 12 October to 8 November 2022. The study was approved by the institutional review board of the research team and surveyed individuals aged 18 or older to examine awareness of neighborhood walkability in the built environment. The questionnaire was distributed to a total of 487 people, and 371 valid responses were used for analysis.
## 3.2.1. Dependent Variable: Perceived Neighborhood Walkability
The dependent variable of this study, the perceived walkability of the neighborhood, was collected through a survey. The respondents evaluated the degree of the walkability of their neighborhood between 0 and 100 points to the question “How good is the walkability of your neighborhood?”. The average of the perceived neighborhood walkability was 76.10 (SD = 17.57).
## 3.2.2. Independent Variables
This study used the weighted accessibility values to each amenity comprising the Walk Score as independent variables. As shown in Table 1, grocery and restaurants have a weight of 3, shopping and coffee shops have a weight of 2, and the remaining five amenities have a weight of 1. For this reason, the weighted amenity accessibility values differ slightly for each amenity in Table 2. Data on five amenities (grocery, restaurants, shopping, coffee shops, and entertainment) were collected from D-Data Hub [41]. Data on banks were obtained from the Financial Supervisory Service of Korea by requesting location data. Data on parks and schools were collected from Road Name Address [42], an address-based industry support service, and data on books from BigData MarketC [43]. Moreover, the average block length and intersection density related to pedestrian friendliness are score-deducting factors. When calculating the average block length and intersection density within a 400 m network buffer around each respondent’s address, the average block length was less than 120 m and the intersection density was higher than 200 at all locations, so no deduction applied under the Walk Score criteria and the two pedestrian friendliness factors did not need to be considered. Accessibility with the weights of the individual amenities constituting the Walk Score was calculated using ArcGIS 10.5 (Esri, Redlands, CA, USA). Figure 2 shows the locations of the nine amenities, the Walk Score components, within walking distance of the respondents’ homes.
The perception variables that evaluate neighborhood walkability in the built environment were selected based on the literature review, and data were collected through the survey. They consist of 17 items under four categories (Table 2). All the perception variables were measured on a 5-point Likert scale: strongly disagree (1), disagree (2), neutral (3), agree (4), and strongly agree (5).
## 3.2.3. Control Variables
As control variables, several individual characteristics of the respondents, such as gender, age, and weekly minutes of walking, were used, and the data were collected through the survey. Weekly minutes of walking was calculated as the number of walking days per week multiplied by the average daily walking time (in minutes). In the descriptive statistics, $37.5\%$ of the respondents were men and $62.5\%$ were women. The average age of respondents was 34.80 years (SD = 14.49), and the average weekly minutes of walking was 161.62 (SD = 115.74).
## 3.3. Statistical Analysis
This study conducted a regression analysis to investigate the relationship among perceived neighborhood walkability, the Walk Score, and built environment awareness. This study used SPSS 27 (IBM Corporation, Armonk, NY, USA) software for the analysis.
## 4. Results
Table 3 shows the results of the multiple regression analysis for perceived neighborhood walkability. Model 1 considered only the Walk Score’s nine amenities, and Model 2 included additional variables measuring the perception of the built environment. The results are as follows. First, in Model 1, perceived neighborhood walkability was higher when schools were more accessible, while the other eight amenities were not significantly associated with perceived neighborhood walkability. In Model 2, only access to banks was a significant amenity variable related to perceived neighborhood walkability. These results show that accessibility to amenities is not substantially correlated with perceived neighborhood walkability. In other words, the Walk Score alone cannot explain how walkable pedestrians feel their neighborhood is, suggesting that additional variables are required alongside accessibility to amenities.
Second, this study examined the variables measuring the perception of the built environment in Model 2. After excluding items with multicollinearity problems, 6 of the 17 variables were analyzed. The analysis confirmed that perceived neighborhood walkability decreases as people feel uncomfortable with steep roads and stairs, consistent with previous studies [24,44,45]. In addition, multiple alternative routes correlated with perceived neighborhood walkability at the 0.001 significance level, meaning pedestrians perceive walkability as higher when more alternative routes are available. This result is similar to a previous study showing that alternative routes to a destination are likely to promote walking [29]. Meanwhile, pedestrians generally thought their neighborhood was more walkable with clear segregation between pedestrians and vehicles, a finding also confirmed in previous studies [27,28,30,38,46,47,48]. In the case of green spaces, there was a statistically significant correlation with perceived neighborhood walkability at the 0.001 significance level: pedestrians perceive walkability as higher when they feel the neighborhood has visually rich green spaces, similar to previous studies showing that green spaces positively influence walking [24,27,49,50]. Traffic volume and odor and smoke on the road did not show statistically significant associations with perceived neighborhood walkability.
Third, the respondents’ gender, age, and weekly minutes of walking did not have statistically significant correlations with perceived neighborhood walkability. Xiao et al. found that women and older adults are more likely to walk [51]. In addition, a similar study found that the average number of walks per week positively affected emotional health but did not show significant results related to physical activity [52]. Although previous studies have shown that individual characteristics are significantly associated with walking, this study found an insignificant association between individual characteristics and perceived neighborhood walkability.
## 5. Discussion
Several studies have shown that the built environment has a significant relationship with walking, and interest in creating walkable environments has increased accordingly. The Walk Score has been widely used as a walkability index to understand neighborhood walkability. However, the Walk Score has limitations as a walkability index in densely built countries such as South Korea. Moreover, the Walk Score lacks a qualitative indicator measuring the perception of the built environment. Previous studies have argued that considering the perception of the built environment from a qualitative perspective alongside quantitative walkability indicators would better measure neighborhood walkability [6]. Therefore, this study examined whether accessibility to the nine amenities used in the Walk Score relates to perceived neighborhood walkability. Furthermore, this study aimed to determine the correlation with perceived neighborhood walkability when including variables measuring the perception of the built environment.
The findings based on the research results are as follows. First, it was confirmed that the Walk Score alone, which mainly considers accessibility to destinations, does not estimate perceived neighborhood walkability. Model 1, which examined the relationship between accessibility to amenities and perceived neighborhood walkability, showed an explanatory power of 0.027. Model 2, which added variables measuring the perception of the built environment, showed a relatively high explanatory power of 0.335. The increase in explanatory power in Model 2 is due to the inclusion of the perception variables. This also shows that accessibility to most destinations in the Walk Score cannot explain perceived neighborhood walkability. Therefore, the Walk Score, which focuses on amenities, does not fully reflect neighborhood walkability and requires additional qualitative variables.
Second, this study showed that perception variables of built environmental walkability in a highly dense city are significantly related to perceived neighborhood walkability. Among the perception variables, hills and stairs hindered perceived neighborhood walkability. Hills and stairs disturb walking by causing detours and a risk of falls; therefore, they can be used to evaluate both convenience and safety. In particular, since older people are more vulnerable to slopes and stairs, these elements can be considered when evaluating walkability by age group. Third, the diversity of alternative routes increased perceived neighborhood walkability. Pedestrians perceived an area as more pedestrian-friendly when more options were available to reach their destination, because they could walk their preferred route. Similarly, previous studies have argued that areas with good connectivity provide more routes [53]. Therefore, alternative routes are expected to be a significant variable for evaluating street connectivity in the future. Fourth, pedestrian segregation increased perceived neighborhood walkability. The sidewalk is the most basic pedestrian infrastructure and is crucial for pedestrians deciding on a walking route [49,54]. Pedestrian segregation should be considered in walkability assessment since it promotes pedestrians’ psychological safety; accordingly, perceptions of pedestrian segregation can be included in the safety aspect. Fifth, green spaces bring pleasantness and enhance perceived neighborhood walkability. Pedestrians walk longer and are more satisfied when walking routes with abundant, well-managed green spaces [27,55]. Moreover, green space can serve as an indicator of a pedestrian-friendly environment, as it is an essential factor in determining health. Therefore, not only accessibility to parks, which constitutes part of the Walk Score, but also perceived green space can be considered as an evaluation item for a pedestrian friendliness index. Furthermore, since green spaces can be identified using the Normalized Difference Vegetation Index (NDVI) and Google Street View, the index will be more widely applicable if they are quantitatively evaluated in future research. These results showed that perception variables of the built environment significantly affect an individual’s perceived walkability. Therefore, it is necessary to introduce additional qualitative variables alongside quantitatively measured built environments to evaluate neighborhood walkability in the future.
This study’s limitations and future research directions are as follows. First, it is necessary to verify whether the results of this study apply to cities other than Daegu; such research would be valuable for understanding walkability across South Korea. This study was conducted in a large city, Daegu, and future research on small and medium-sized cities would be informative. Second, although this study used the Walk Score only, future research could achieve higher policy applicability by using other qualitative and quantitative indicators, such as the Walkability Index, the Pedestrian Index of the Environment, and the Neighborhood Destination Accessibility Index, alongside the Walk Score, to examine correlations with the perception of the built environment. Lastly, this study was conducted on all adults aged 18 or older, but special consideration is needed for the elderly, who may have mobility difficulties, because the level of perceived neighborhood walkability will vary with age. Examining the built environmental factors that affect the neighborhood walkability of elderly people with walking restrictions can be an important future study [56]. Furthermore, as attempted in Hirsch and colleagues’ study, it will be possible to examine the Walk Score and the mobility of older adults [57].
Despite these limitations, this study reviewed whether the Walk Score could explain the individual’s perceived neighborhood walkability in Daegu with the high density of the built environment. It further examined the correlation between the perceived neighborhood walkability and the perception of the built environment in addition to the Walk Score. This study found that the perception of the built environment had a more substantial influence on perceived neighborhood walkability than the accessibility to amenities. It proved that the Walk Score should include pedestrian perception and quantitative measurement. It is also meaningful in that it provided evidence that the perception that people experience should be accompanied by the quantitatively measured Walk Score.
## 6. Conclusions
Promoting walking is suggested as a key physical activity for health in public health, transportation engineering, and urban planning. This study determined whether accessibility to the destinations that constitute the Walk Score is related to perceived neighborhood walkability. We added variables of perception of the built environment to investigate their relationship with perceived neighborhood walkability. As a result, the Walk Score alone could not explain perceived neighborhood walkability, whereas a correlation between the perception of the built environment and perceived neighborhood walkability was confirmed. Therefore, this study argues that more qualitative factors should be included and suggests that additional perceptions of the built environment must be considered in the Walk Score. This study is expected to contribute to developing the Walk Score to the next level.
# Risk of Subsequent Preeclampsia by Maternal Country of Birth: A Norwegian Population-Based Study
## Abstract
In this nationwide population-based study, we investigated the associations of preeclampsia in the first pregnancy with the risk of preeclampsia in the second pregnancy, by maternal country of birth, using data from the Medical Birth Registry of Norway and Statistics Norway (1990–2016). The study population included 101,066 immigrant and 544,071 non-immigrant women. Maternal country of birth was categorized according to the seven super-regions of the Global Burden of Disease study (GBD). The associations between preeclampsia in the first pregnancy and preeclampsia in the second pregnancy were estimated using log-binomial regression models, with no preeclampsia in the first pregnancy as the reference. The associations were reported as adjusted risk ratios (RR) with $95\%$ confidence intervals (CI), adjusted for chronic hypertension, year of first childbirth, and maternal age at first birth. Compared with no preeclampsia in the first pregnancy, preeclampsia in the first pregnancy was associated with a considerably increased risk of preeclampsia in the second pregnancy in both immigrant ($$n = 250$$; $13.4\%$ vs. $1.0\%$; adjusted RR 12.9 [$95\%$ CI: 11.2, 14.9]) and non-immigrant women ($$n = 2876$$; $14.6\%$ vs. $1.5\%$; adjusted RR 9.5 [$95\%$ CI: 9.1, 10.0]). Immigrant women from Latin America and the Caribbean appeared to have the highest adjusted RR, followed by immigrant women from North Africa and the Middle East. A likelihood ratio test showed that the variation in adjusted RR across all immigrant and non-immigrant groups was statistically significant ($$p \leq 0.006$$). Our results suggest that the association between preeclampsia in the first pregnancy and preeclampsia in the second pregnancy might be stronger in some groups of immigrant women than in non-immigrant women in Norway.
## 1. Introduction
Preeclampsia is a pregnancy complication affecting 3 to $5\%$ of women globally [1,2]. It is a leading cause of perinatal morbidity and mortality [2] as well as a risk factor for adverse long-term maternal health consequences including cerebrovascular and cardiovascular diseases [3,4]. Although the exact cause of preeclampsia is unknown, its risk strongly increases with higher maternal age, body mass index, interpregnancy weight change, gestational diabetes, and chronic hypertension [5,6]. Recent research has further highlighted an increased risk of preeclampsia in women with COVID-19 infection in early pregnancy [7]. Additionally, a genetic predisposition appears to increase the risk; women experiencing preeclampsia in a first pregnancy have a significantly increased risk of preeclampsia in a second pregnancy compared with those who do not develop the condition in the first pregnancy [8,9].
Previous studies of preeclampsia suggest that immigrant women overall have a lower risk of preeclampsia than women in the host population in the receiving countries [10,11,12,13]. This has been largely explained by the healthy migrant effect, in that women migrating from one country have better health at arrival than the general population in the receiving country [10,14]. However, more recent studies using maternal country of birth as the exposure show a more nuanced picture, with a higher risk of preeclampsia in refugees and women from low-income countries [13,15]. Thus, to better understand the variation in preeclampsia risk across immigrant groups in receiving countries, alternative hypotheses should be investigated.
In Norway, antenatal care services are offered free of charge, and the use of interpreters is statutory [16,17]. However, previous studies suggest that subgroups of immigrant women giving birth in receiving countries may not receive the information and recommendations given during pregnancy and childbirth in an intelligible form [18,19]. These studies also report low usage of interpreters in maternity care and difficulties navigating the healthcare system to obtain information and receive appropriate care during pregnancy [18,19]. Due to such structural barriers to accessing healthcare, immigrant women may receive poorer quality of care during pregnancy than non-immigrants. It is therefore conceivable that some subgroups of immigrants may be particularly susceptible to complications and health problems during pregnancy.
As part of the postpartum follow-up program in Norway, all women with preeclampsia in a pregnancy should be informed of the high recurrence risk of preeclampsia in a subsequent pregnancy [20]. They should further be advised to avoid general risk factors for preeclampsia such as high interpregnancy weight gain [5]. However, if structural barriers reduce access to healthcare, this information may not be given or correctly understood, reducing the possibility to prevent preeclampsia in a subsequent pregnancy. If this information is not communicated in a tailored and intelligible manner in maternity care for immigrant women, we might expect a higher risk of recurrent preeclampsia in some immigrant groups compared with non-immigrant women.
To test this hypothesis and to identify the subgroups of immigrant women susceptible to preeclampsia, we examined the association of preeclampsia in a first pregnancy with the risk of preeclampsia in the second pregnancy across seven maternal regions of birth as defined by the Global Burden of Disease study (GBD).
## 2.1. Study Design
This population-based registry study used individual-linked data from the Medical Birth Registry of Norway (MBRN) and Statistics Norway. The linkage of data and identification of all pregnancies to the same woman was enabled through the national identity number assigned to all Norwegian residents. The MBRN comprises mandatory, standardized notification of all live- and stillbirths from 16 weeks of gestation (12 weeks since 2002) in Norway since 1967 [21]. The data include information on maternal health before and during pregnancy, and information on maternal and infant health during pregnancy, labor, and birth [21]. Statistics Norway collects, processes, and distributes official statistics in Norway [22]. Data comprise sociodemographic and migration-related factors about all individuals who are or have been a resident in Norway since 1990 [23].
## 2.2. Study Sample
We analyzed all women with first and subsequent births from 1990 to 2016 ($$n = 661,098$$ women with 1,322,870 pregnancies). In particular, women giving birth before 1990 or having their first child outside of Norway during the study period (i.e., women registered as multiparous at the first registered pregnancy in the MBRN) were not included in the initial source population. Furthermore, we focused our analyses only on women categorized as immigrant women (foreign-born with two foreign-born parents) and non-immigrant women (Norwegian-born with at least one Norwegian-born parent). Foreign-born women with one foreign-born parent and those born in Norway to two foreign-born parents (second generation immigrants) were not analyzed, as these represented smaller heterogeneous groups. After performing these exclusions, our study sample contained 645,137 women with 1,291,947 pregnancies (Figure 1).
## 2.3. Preeclampsia
Preeclampsia was coded according to the International Statistical Classification of Diseases and Related Health Problems, 8th (1990–98) and 10th (1999 onwards) revisions. This coding corresponds to the criteria of the Norwegian Society of Gynecology and Obstetrics, i.e., an increase in blood pressure (≥140/90 mmHg) combined with proteinuria (≥300 mg in a 24 h urine collection) after 20 weeks of gestation [20,24]. The diagnosis was recorded in the MBRN by open text (1990–1998) or by checkbox (from 1999 onwards). Validation studies covering two periods (1967 to 2005 and 1999 to 2010) [25,26] indicate that the registration of preeclampsia correlates well with medical records.
## 2.4. Region of Birth
Maternal country of birth was obtained from Statistics Norway. Due to the small numbers of preeclampsia in both the first and second pregnancies in the study population, we categorized maternal country of birth (immigrant women only) according to the seven super-regions defined by the GBD study [27,28] as follows: (i) Central Europe, Eastern Europe, and Central Asia; (ii) high income; (iii) Latin America and the Caribbean; (iv) North Africa and the Middle East; (v) South Asia; (vi) Southeast Asia, East Asia, and Oceania; and (vii) Sub-Saharan Africa. The high income super-region comprised women from the following regions: Southern Latin America, Western Europe, North America, Australasia, and high income Asia Pacific [28].
## 2.5. Other Variables
The MBRN also provided information on maternal age at birth (in years), year of childbirth, parity, and interpregnancy interval (in months). The interpregnancy interval was calculated as the time from the birth of the first child to the estimated conception of the second child (date of the second birth minus its gestational age) for the same woman [29]. Length of residence (immigrants only) was calculated as the difference between the year of childbirth of the first child (data from the MBRN) and the year of the mother’s official residence permit in Norway (data from Statistics Norway).
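As a concrete illustration, the interval reduces to simple date arithmetic; the function below is a hypothetical sketch with invented values, not the registry computation itself.

```python
from datetime import date, timedelta

def interpregnancy_interval_months(first_birth: date, second_birth: date,
                                   second_gestation_weeks: float) -> float:
    """Months from the first birth to the estimated conception of the
    second child (second birth date minus its gestational age)."""
    est_conception = second_birth - timedelta(weeks=second_gestation_weeks)
    return (est_conception - first_birth).days / 30.44  # mean days per month

# Invented example: first birth Jan 2010, second birth Oct 2012 at 40 weeks
print(round(interpregnancy_interval_months(date(2010, 1, 15),
                                           date(2012, 10, 1), 40)))  # ~23
```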
## 2.6. Statistical Analyses
All analyses were performed in Stata IC version 16 (Stata Statistical Software, College Station, TX, USA), using women as the study unit of analysis. Women with multi-fetal pregnancies were counted only once.
The analyses were organized in two parts (see Figure 1). First, we described the absolute preeclampsia risk in the first pregnancy and the absolute recurrence risk in subsequent pregnancies up to the fourth pregnancy in the source population ($$n = 1,291,947$$ pregnancies). We additionally calculated the numbers for each subsequent pregnancy in these analyses. All calculations were performed separately for immigrants and non-immigrants overall, and the results were visualized in a tree diagram using the approach of Hernández-Díaz et al. [8].
In the second part and the main analysis, we compared the risk of preeclampsia in the second pregnancy given preeclampsia status in the first pregnancy for women with at least two pregnancies and for each of the seven maternal GBD regions of birth ($$n = 1,102,559$$ pregnancies). Preeclampsia risk beyond the second pregnancy was not investigated because of limited preeclampsia numbers in several immigrant groups at higher parities. The associations were estimated using log-binomial regression models and reported as crude and adjusted risk ratios (RRs) with $95\%$ confidence intervals (CIs), adjusted for chronic hypertension, year of first childbirth, and maternal age at first birth.
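For readers replicating the modeling outside Stata, a log-binomial model is a binomial GLM with a log link, so exponentiated coefficients are risk ratios rather than odds ratios. Below is a minimal sketch using Python's statsmodels with hypothetical variable names; it is not the study's code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical columns: pe2 (0/1, preeclampsia in 2nd pregnancy), pe1 (0/1,
# preeclampsia in 1st), chron_htn (0/1), birth_year, age_first_birth
df = pd.read_csv("pregnancies.csv")

# Log-binomial model; on older statsmodels the link class is spelled `log`.
# Note: log-binomial fits can fail to converge when predicted risks approach 1.
model = smf.glm("pe2 ~ pe1 + chron_htn + birth_year + age_first_birth",
                data=df,
                family=sm.families.Binomial(link=sm.families.links.Log()))
fit = model.fit()

rr = np.exp(fit.params["pe1"])           # adjusted risk ratio
ci = np.exp(fit.conf_int().loc["pe1"])   # 95% confidence interval
print(f"adjusted RR = {rr:.1f} (95% CI {ci[0]:.1f}, {ci[1]:.1f})")
```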
Finally, to investigate whether the RR of preeclampsia in a second pregnancy after preeclampsia in the first pregnancy differed across the seven GBD regions, a likelihood ratio test was performed by comparing the log-likelihoods of models with and without an interaction term (preeclampsia in first pregnancy × GBD super-regions). A significant interaction term would indicate different effect estimates across groups.
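Continuing the hypothetical sketch above, the likelihood ratio test compares the nested fits directly; `gbd_region` is an assumed categorical column.

```python
from scipy import stats

# Null model: no interaction, i.e., one common RR across regions
base = smf.glm("pe2 ~ pe1 + gbd_region + chron_htn + birth_year"
               " + age_first_birth", data=df,
               family=sm.families.Binomial(link=sm.families.links.Log())).fit()

# Alternative model: pe1 x region interaction lets the RR differ by region
inter = smf.glm("pe2 ~ pe1 * gbd_region + chron_htn + birth_year"
                " + age_first_birth", data=df,
                family=sm.families.Binomial(link=sm.families.links.Log())).fit()

lr_stat = 2 * (inter.llf - base.llf)          # likelihood ratio statistic
df_diff = inter.df_model - base.df_model      # extra interaction parameters
p_value = stats.chi2.sf(lr_stat, df_diff)
print(f"LR = {lr_stat:.2f}, df = {df_diff}, p = {p_value:.3f}")
```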
In the sensitivity analyses, we excluded women with multi-fetal pregnancies and HELLP syndrome (hemolysis, elevated liver enzymes, and low platelet count). We also performed additional adjustments for education, interpregnancy interval, and length of residence (immigrants only) to account for other possible background differences between groups. We further adjusted for maternal body mass index for the years available (2008–2016) for immigrant and non-immigrant women overall. The results remained essentially the same.
## 2.7. Ethics and Public Involvement
This is an observational study approved by the Southeast Regional Committees for Medical and Health Research Ethics in Norway (reference number: 2014/1278/REK Southeast Norway). Data were used under license for this study.
This study used standardized surveillance data. Patients were not involved in the development of the research question, outcome measures, design, or conduct of the study.
## 3. Results
The overall risk of preeclampsia in the study was $3\%$ ($5\%$ in the first pregnancy and $2\%$ in later pregnancies). The risk of preeclampsia in the first pregnancy was $2.9\%$ ($$n = 2965$$) for immigrants and $4.8\%$ ($$n = 26,125$$) for non-immigrants.
Table 1 shows the relevant background characteristics in the sample of women with at least one subsequent pregnancy. Among immigrants, women from high income regions represented the largest group ($$n = 13,508$$ women), while the smallest group comprised women from Latin America and the Caribbean ($$n = 1445$$ women).
Figure 2 presents the risks of preeclampsia for up to four subsequent pregnancies in immigrant (Figure 2A) and non-immigrant (Figure 2B) women. Among those with preeclampsia in the first pregnancy, the risk of preeclampsia in the second pregnancy was $13.4\%$ ($$n = 250$$) for immigrants and $14.6\%$ ($$n = 2876$$) for non-immigrants. For women with a third pregnancy, the risk of preeclampsia in all three subsequent pregnancies was $21.3\%$ for immigrants and $28.7\%$ for non-immigrants (Figure 2).
The mean maternal age at first birth ranged from 24.9 [SD 3.9] to 29.9 [SD 4.4] years in immigrant women from South Asia and the high-income regions, respectively. Among women with two or more pregnancies, mean parity ranged from 2.2 [SD 0.5] in immigrant women from Latin America and the Caribbean to 2.8 [SD 1.1] in immigrant women from Sub-Saharan Africa. The mean interpregnancy interval between the first and second pregnancy ranged from 24 months [SD 22.6] in Sub-Saharan immigrants to 35 months [SD 29.4] in women from Latin America and the Caribbean.
Table 2 shows the crude and adjusted RR for preeclampsia in the second pregnancy for women with preeclampsia in the first pregnancy compared with women without preeclampsia in the first pregnancy. Immigrant women from Latin America and the Caribbean had the highest RR of preeclampsia in the second pregnancy (adjusted RR 17.4 [$95\%$ CI 8.1–37.4]), followed by immigrant women from North Africa and the Middle East (adjusted RR 14.9 [$95\%$ CI 10.5–21.3]). The lowest RR of preeclampsia in the second pregnancy was found in non-immigrant women (adjusted RR 9.5 [$95\%$ CI 9.1–10.0]). The difference in RR across regions of birth was statistically significant by the likelihood ratio test in both crude ($$p \leq 0.004$$) and adjusted ($$p \leq 0.006$$) regression models.
Among immigrant women, those with preeclampsia in the first pregnancy were more likely to proceed to a second pregnancy than those without preeclampsia in the first pregnancy (Figure 2; $63\%$ vs. $56\%$), but no apparent group difference was seen for later pregnancies. For non-immigrant women, the likelihood of a second pregnancy was almost the same for those with and without preeclampsia in the first pregnancy (Figure 2; $76\%$ vs. $73\%$), but fewer women with previous preeclampsia had a third pregnancy ($29\%$ vs. $34\%$).
When excluding women with multi-fetal pregnancies ($$n = 26,086$$) and women with HELLP syndrome ($$n = 683$$), the results in Table 2 remained essentially the same. Furthermore, additional adjustment for education, interpregnancy interval, and length of residence (immigrants only) did not affect the results notably.
## 4. Discussion
In this study, we found that all women who experienced preeclampsia in the first pregnancy had a substantially increased risk of preeclampsia in the second pregnancy compared with women without preeclampsia in the first pregnancy, irrespective of the country of birth. We further showed that this association was stronger for immigrant women overall as well as for certain subgroups of immigrant women compared with non-immigrant women.
Our finding of a stronger association with preeclampsia in immigrant women than in non-immigrant women may support the predefined hypothesis of the current study. Follow-up and tailored information are crucial for reducing the risk of subsequent pathology in pregnancy [30]. All women developing preeclampsia in Norway should be carefully informed about the recurrence risk before entering a subsequent pregnancy [20]. They should also be advised not to gain interpregnancy weight, as this increases the risk of recurrent preeclampsia [5]. Moreover, women with a history of preeclampsia should be advised to control their blood pressure early in a subsequent pregnancy [20]. This information is essential for raising awareness of possible lifestyle adjustments and for the early detection of preeclampsia in subsequent pregnancies. However, due to possible structural communication barriers between immigrant women and the healthcare system [18,19], we hypothesized that immigrant women with preeclampsia in a first pregnancy receive or acquire sufficient preventive information on recurrent preeclampsia in a subsequent pregnancy to a lesser extent than non-immigrants. If our hypothesis is true, we would therefore expect a higher risk of subsequent preeclampsia in some immigrant groups compared with others.
To our knowledge, this is the first study to compare the RR of preeclampsia in a subsequent pregnancy between immigrant and non-immigrant women. Because it is the first such study, comparing our results with previous studies is challenging. However, in light of our hypothesis, it may be more informative to compare our results with results from the countries that immigrant women in Norway frequently migrate from. If the RR of subsequent preeclampsia in immigrant women in a receiving country is higher than the RR reported in a woman’s country of birth, our hypothesis of poorer communication in receiving countries may be supported. For example, in a hospital-based study in Tanzania, the RR of preeclampsia in a second pregnancy was reported to be 9-fold for women with a history of preeclampsia compared with those without such a history [31]. In our study, immigrant women from the Sub-Saharan African region overall had an almost 11-fold increased risk of preeclampsia in a second pregnancy. A higher RR in immigrant women than in non-immigrant women may support our hypothesis of poorer communication between immigrant women and healthcare providers.
Although our results could support the communication barrier hypothesis, the findings should be discussed in light of the large RRs and their CIs. When comparing the RRs across GBD regions, the RR varied from 10 to 18. However, the CIs for these effect estimates largely overlapped the RR of non-immigrant women (see Table 2), except for immigrant women from North Africa and the Middle East (RR 15) and immigrant women from high income countries (RR 14). Further, when analyzing immigrants overall, we found that the RR for subsequent preeclampsia was 13 for immigrants and 10 for non-immigrants. Despite the higher RR for preeclampsia in immigrants compared with non-immigrants, the RRs are large and the difference in RR between the groups is relatively small. We should therefore be careful about firmly concluding that immigrant women with preeclampsia in a first pregnancy are susceptible to a higher risk of preeclampsia in a second pregnancy than non-immigrant women.
Because our study did not directly measure the hypothesized communication barriers, we cannot be entirely certain that the difference in RR between immigrants and non-immigrants is truly caused by poorer communication between immigrants and healthcare providers. There might be other potential mechanisms behind the observed differences, including a genetic susceptibility to preeclampsia in some immigrant groups that we were not able to control for in our analyses. Further, the complexity of migration should not be underestimated [32,33], and stressors related to the process of migration, e.g., unsafe migration routes, could have had an impact on our results. However, if such mechanisms dominated, we would expect the RR for at least some immigrant groups to be lower than that found for non-immigrants. Instead, our results showed a consistently higher RR for all studied GBD groups, which may strengthen the hypothesis of communication barriers for immigrant women compared with Norwegian-born women.
Consistent with previous studies [11,13,15], we found that the overall risk of preeclampsia (the proportion of preeclampsia across all parities) was lower in immigrant than in non-immigrant women (3% vs. 5%). The lower overall risk of preeclampsia in immigrant women has mainly been explained by the healthy immigrant effect [12], whereby women moving to another country are healthier than the general population in the receiving country [34]. In this study, focusing on the preeclampsia risk in the second pregnancy given preeclampsia status in the first pregnancy, immigrants did not appear to have a lower RR for preeclampsia in a second pregnancy. A plausible explanation for the diverging results of overall and subsequent risk of preeclampsia may relate to the genetic aspect of preeclampsia: women who develop preeclampsia in a first pregnancy are at a genetically high risk of developing the condition in a subsequent pregnancy, for both immigrant and non-immigrant women, irrespective of the healthy migrant effect.
Awareness of the risk of subsequent preeclampsia, and preventive measures to reduce this risk in the second pregnancy, is crucial for women with preeclampsia in the first pregnancy. Tailored information on the importance of follow-up during pregnancy, to obtain the best compliance in maternity care, is hence crucial for immigrant women. The main strengths of this study include the national population-based design, the standardized collection of data, and the large sample size. The large sample size and the long timespan of the study enabled a detailed analysis of the risk and subsequent risk for both immigrants and non-immigrants over time. By using the unique personal identification number, all pregnancies of the same woman were identified, enabling an accurate calculation of risk and subsequent risk up to a fourth pregnancy. Previous validation studies of the preeclampsia diagnosis in the MBRN [25,26] have reported that the diagnosis correlates well with medical records, adding further strength to our study.
This study has some limitations. Because of the low number of recurrent preeclampsia cases in most countries, we grouped our study sample into broad GBD regions. This may have led to an underestimation or overestimation of the risk of preeclampsia for immigrant women from a specific country, which may further reduce generalizability to specific immigrant groups.
## 5. Conclusions
In this national population-based study of women with two or more pregnancies, both immigrant and non-immigrant women with preeclampsia in a first pregnancy had a substantially increased risk of preeclampsia in a second pregnancy compared with those without preeclampsia in a first pregnancy. The overall variation between GBD regions was modest; however, immigrant women from some GBD regions appeared to have a higher risk of preeclampsia in a second pregnancy than non-immigrants. Close follow-up of all women with a history of preeclampsia is important for early detection and possible treatment of the condition in a subsequent pregnancy.
# Lifestyle Score and Risk of Hypertension in the Airwave Health Monitoring Study of British Police Force Employees
## Abstract
Background: Evidence suggests that promoting a combination of healthy lifestyle behaviors, instead of exclusively focusing on a single behavior, may have a greater impact on blood pressure (BP). We aimed to evaluate lifestyle factors and their impact on the risk of hypertension and BP. Methods: We analyzed cross-sectional health-screening data from the Airwave Health Monitoring Study of 40,462 British police force staff. A basic lifestyle-score including waist-circumference, smoking, and serum total cholesterol was calculated, with a greater value indicating a better lifestyle. Individual/combined scores of other lifestyle factors (sleep duration, physical activity, alcohol intake, and diet quality) were also developed. Results: A 1-point higher basic lifestyle-score was associated with a lower systolic BP (SBP; −2.05 mmHg, 95% CI: −2.15, −1.95) and diastolic BP (DBP; −1.98 mmHg, 95% CI: −2.05, −1.91) and was inversely associated with the risk of hypertension. Combined scores of other factors showed attenuated but significant associations with the addition of sleep, physical activity, and diet quality to the basic lifestyle-score; however, alcohol intake did not further attenuate the results. Conclusions: Modifiable intermediary factors have a stronger contribution to BP, namely, waist-circumference and cholesterol levels, and factors that may directly influence them, such as diet, physical activity, and sleep. The observed findings suggest that alcohol is a confounder in the BP–lifestyle score relation.
## 1. Introduction
Cardiovascular disease (CVD) is a leading cause of death worldwide, with cardiovascular incidents accounting for almost 85% of total CVD mortality [1]. Hypertension, or high blood pressure (BP), is a major risk factor for cardiovascular morbidity [2], identified as the greatest single preventable cause of mortality worldwide [3].
Hypertension is highly influenced by well-established behavioral lifestyle risk factors, such as smoking and unhealthy diets, and by intermediary factors such as hyperlipidemia and central adiposity [4]. Promoting a healthy lifestyle is an effective approach for improving high BP; however, most studies supporting hypertension prevention recommendations assessed only the single effects of, for example, physical activity (PA) [5] or other lifestyle factors [6], and only a few investigated lifestyle factors concurrently, adding weight to the concept that multiple factors can exert a greater effect when considered together [7,8,9,10]. Moreover, most available scoring systems, such as the QRISK1 and QRISK2 scores [11] used in the National Institute for Health and Care Excellence (NICE) guidelines (including obesity, smoking, serum cholesterol, and other factors [12]) and the American Heart Association’s Life’s Simple 7 (comprising seven modifiable factors: smoking, body mass index (BMI), PA, diet, cholesterol, BP, and fasting blood glucose [13,14]), were established to reduce the risk of CVD. Whether these scores apply to the risk of hypertension is yet to be investigated. Further, previous studies were limited to either young adults or to only a few risk factors at a time, thus not capturing the multitude of other lifestyle factors that may lower the risk of hypertension further, e.g., sleep [5,15,16,17,18,19,20,21].
Therefore, combining lifestyle factors instead of exclusively focusing on each may significantly impact BP [22] and can be more far-reaching, since individual lifestyle recommendations have shown differential effects in specific subgroups [23]. In light of this, to promote targeted interventions and identify which lifestyle factors have the greatest impact on BP/hypertension, the current study aimed to evaluate: a basic lifestyle-score (including available factors from the QRISK2 score [11]); individual lifestyle factors and their combined scores; and the addition of individual lifestyle factors to the basic score. Cross-sectional data from the Airwave Health Monitoring Study, the first large cohort investigating the health of the police workforce in Great Britain [24], comprising a major resource for biomedical research with 42,112 participants enrolled by the end of 2012 [24], were used. Uniquely, this cohort allows for the consideration of job strain and working patterns specific to the police force, which could affect the achievement and maintenance of a healthy lifestyle [24], and will help in evaluating healthy lifestyle factors in a population faced with unique occupational challenges.
## 2.1. Study Design
The study design and recruitment details have been published previously [24]. In brief, the study launched in 2004, and a total of 53,114 members of the police force were enrolled by the end of 2015. All participants provided written informed consent, and the study ethics were approved by the National Health Service Multi-Site Research Ethics Committee (MREC/13/NW/0588). For this analysis, participants who attended health-screening measurements between 2007 and 2015 were included. Those diagnosed with diabetes or CVD and those with missing data on key variables required for this analysis (e.g., BP, PA, sleep duration, waist-circumference, smoking, and biochemical data) were excluded (n = 12,652). The final sample included 40,462 adults (25,382 men and 15,080 women).
## 2.2. Clinic Visit
Participants were invited for health screening at study clinics, where, following a standard protocol, trained staff conducted clinical examinations; average measurements were used in the analyses. Non-fasting venous blood samples were collected on-site and transported to the study laboratory to assess levels of serum total and HDL cholesterol (IL650 analyser, Instrumentation Laboratory, Bedford, MA, USA). All laboratory equipment was quality assured and controlled. Weight and height were measured twice, with participants wearing light clothes, without shoes or socks, using a Marsden H226 portable stadiometer and weighing scale. Waist-circumference was measured twice between the lower rib and the iliac crest in the mid-axillary line using a Wessex finger/joint measure tape. BP was measured three times, 30 s apart, after participants were seated and relaxed (Omron HEM 705-CP, OMRON Corp., Kyoto, Japan). Hypertension was defined as having a systolic BP (SBP) ≥ 140 mmHg and a diastolic BP (DBP) ≥ 90 mmHg [25], or a self-reported diagnosis, or the intake of anti-hypertensive medication.
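The case definition above combines a measured criterion with self-report and medication use. A minimal sketch of the rule (function and variable names are hypothetical, not from the study):

```python
def is_hypertensive(sbp_mmhg: float, dbp_mmhg: float,
                    self_reported: bool, on_bp_medication: bool) -> bool:
    """Hypertension per the study definition: SBP >= 140 mmHg AND
    DBP >= 90 mmHg [25], or a self-reported diagnosis, or the intake
    of anti-hypertensive medication."""
    measured = sbp_mmhg >= 140 and dbp_mmhg >= 90
    return measured or self_reported or on_bp_medication

# Averaged readings of 138/92 with no diagnosis or medication do not
# meet the conjunctive measured criterion:
print(is_hypertensive(138, 92, False, False))  # False
```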
## 2.3. Socio-Demographic and Lifestyle Data
Participants completed a self-administered electronic questionnaire providing socio-demographic and lifestyle data (e.g., age, sex, and education level). Job strain was measured using the Karasek Job Content Questionnaire [26], which uses the quadrant approach [27] to categorize participants into high strain (low control, high demand), active (high control, high demand), passive (low control, low demand), and low strain (high control, low demand). Physical activity (PA) was assessed using the short version of the International PA Questionnaire [28]. The questionnaire asks participants to report the frequency and duration of domain-specific activities and energy expenditure in metabolic equivalent minutes/week, and based on these data, the intensity of activities (high, moderate, or low) is assigned [28].
## 2.4. Dietary Data
A subsample of participants (n = 8546) completed 7-day food diaries to report their dietary intake. Photographs and common household measures developed by Nelson et al. were provided [29] for better portion-size estimation. Details on cooking methods and brand names were included. For quality control, trained nutritionists/dietitians followed a study-specific operational manual to code the diaries and match the food/drink items recorded to a UK Nutritional database code and a portion size [30]. For nutrient analysis, Dietplan software (version 6.7; Forestfield Software Ltd., Horsham, UK) based on the UK nutrient database of McCance and Widdowson [31] was used.
## 2.5. Nutrient-Rich Food 9.3 Index-Score
Diet quality was assessed using the Nutrient-Rich Food 9.3 (NRF9.3) index-score [32], reported to be highly correlated with the Healthy Eating Index, a diet-quality score established by the US Dietary Guidelines [33]. The NRF9.3 index-score was computed as the sum of the percentages of daily values for nine nutrients to encourage (protein, dietary fiber, vitamins A, C, and E, calcium, iron, potassium, and magnesium) minus the sum of the percentages of maximum recommended values for three nutrients to restrict (saturated fat, added sugar, and sodium), per 100 kcal. A higher NRF9.3 index-score reflects higher nutrient quality per 100 kcal.
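A minimal sketch of this calculation follows. The nutrient keys and reference-value dictionaries are hypothetical, and the capping of each %DV at 100% is an assumption borrowed from common NRF9.3 implementations rather than a detail stated in this paper:

```python
ENCOURAGE = ["protein", "fiber", "vit_a", "vit_c", "vit_e",
             "calcium", "iron", "potassium", "magnesium"]
RESTRICT = ["sat_fat", "added_sugar", "sodium"]

def nrf9_3(per_100kcal: dict, dv: dict, mrv: dict) -> float:
    """NRF9.3 per 100 kcal: sum of %DV for 9 qualifying nutrients
    (each capped at 100%) minus sum of %MRV for 3 disqualifying ones."""
    nr9 = sum(min(per_100kcal[n] / dv[n], 1.0) * 100 for n in ENCOURAGE)
    lim3 = sum(per_100kcal[n] / mrv[n] * 100 for n in RESTRICT)
    return nr9 - lim3
```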
## 2.6. Lifestyle-Score
A basic lifestyle-score was calculated from the factors of the QRISK2 score [11] that were available in our data: waist-circumference, smoking, and serum cholesterol (Table 1). For the basic lifestyle-score, participants were stratified into three mutually exclusive categories: poor (0–3 points), intermediate (4 points), and ideal (5–6 points).
Additionally, other individual lifestyle factors likely to be on the causal pathway for the risk of hypertension (sleep duration, PA, alcohol intake, and diet quality) and their combined scores were calculated. Participants were also stratified into three mutually exclusive categories: poor, intermediate, and ideal. Each lifestyle factor was defined as poor, intermediate, and ideal, following the 2020 Impact Goals definitions [14]. Ethnic/gender-specific cut-offs for waist-circumference [12,34] were used. For PA, the American Heart Association guide for assessing PA was applied [35]. For sleep, the American Academy of Sleep Medicine and Sleep Research Society [36] guidelines were applied to identify poor (≤5 or ≥9 h), intermediate (6 h), and ideal (7–8 h) amounts of sleep. For diet quality, participants were classified based on published cut-offs of a similar UK sample population [37] into poor (NRF9.3 < 15), intermediate (NRF9.3 16–25), and ideal (NRF9.3 > 25) diet quality.
To evaluate the impact of these lifestyle factors on the basic lifestyle-score relative to BP/hypertension, additional scores were calculated by adding one factor at a time to the basic lifestyle risk-score, defined as follows: a basic lifestyle-score + sleep duration; a basic lifestyle-score + sleep duration + PA; a basic lifestyle-score + sleep duration + PA + alcohol intake; and a basic lifestyle-score + sleep duration + PA + alcohol intake + diet quality (in a subsample, n = 8546).
## 2.7. Statistical Analysis
To calculate scores, ideal levels were given 2 points, intermediate 1 point, and poor 0 points. The sum of points for each lifestyle factor was used to calculate the cumulative score, with the lowest possible score being zero (poor levels of all factors) and the highest for all seven factors being 14 (ideal levels of all factors).
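As an illustration of this scoring rule, here is a minimal sketch (factor names hypothetical); the basic score over three factors ranges from 0 to 6, and all seven factors give 0 to 14:

```python
POINTS = {"ideal": 2, "intermediate": 1, "poor": 0}

def lifestyle_score(factor_levels: dict) -> int:
    """Cumulative score: sum of 0/1/2 points over the rated factors."""
    return sum(POINTS[level] for level in factor_levels.values())

basic = lifestyle_score({"waist_circumference": "ideal",
                         "smoking": "intermediate",
                         "cholesterol": "poor"})
print(basic)  # 3 -> falls in the "poor" basic-score category (0-3 points)
```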
Baseline characteristics of participants were presented according to the levels of the basic lifestyle-score (ideal (5–6 points), intermediate (4 points), and poor (0–3 points)) using a linear age-, sex-, and employment country-adjusted model to assess the linearity of the investigated relations.
Associations of lifestyle factors with BP were evaluated using multivariate linear regression models adjusted for age, sex, and employment country. Subsequently, two sequential multivariate linear regression models adjusted for potential confounders were used to determine associations with BP for each 1-point higher basic lifestyle-score. Further, individual lifestyle factors and their combined scores were investigated in relation to BP. Finally, the relative impact of each lifestyle factor on the basic lifestyle-score was assessed by adding one factor at a time to the basic lifestyle-score. Logistic regression analysis was applied to estimate the odds of hypertension per total score and per level of the lifestyle-scores. Stratified analyses and interaction terms detected no evidence of effect modification by age, sex, or BMI. Nevertheless, given that the average age of participants was relatively young (mean = 40.4 (SD = 8.9) y), participants were stratified by age (≤30, 30 to ≤40, 40 to ≤50, >50 y) and the linear regression analysis was repeated to gain more insight into the relation with BP.
To investigate whether the main findings were independent of characteristics such as a self-reported diagnosis of hypertension, antihypertensive drug use, and prevalent major chronic diseases (e.g., diabetes), the multivariate linear regression analyses were repeated in sub-cohorts excluding participants with characteristics that might bias the association between the basic lifestyle-score and BP. Participants with a self-reported diagnosis of hypertension, users of antihypertensive drugs, and those with prevalent cardiovascular disease or diabetes mellitus were identified in the foregoing cohort (n = 5686). Additionally, a sub-cohort excluding energy mis-reporters, defined using the Goldberg equation [38], was derived from the 8546 participants who completed the dietary data (n = 7567). SAS version 9.3 (SAS Institute, Cary, NC, USA) was used to perform the statistical analysis; p values < 0.05 were considered statistically significant.
## 3.1. Demographic and Lifestyle Characteristics of the Sample
The sample included 40,462 participants with an average age (mean (SD)) of 40.5 (8.9) years. Overall, 95% of the participants were White and 63% were men (Table 2). When participants were stratified by the basic lifestyle-score, about 30% had a poor, 26% an intermediate, and 44% an ideal lifestyle-score.
## 3.2. Association between the Basic Lifestyle-Score and BP/Hypertension
A 1-point higher basic lifestyle-score was associated with SBP/DBP differences of −2.05/−1.98 mmHg (Model 2; Table 3).
Logistic regression analyses showed a significant relationship between the basic lifestyle-score and the odds of hypertension (OR per 1-point increase = 0.72 (95% CI: 0.70, 0.74)) (Model 2, Figure 1A and Table S1).
Across levels of the basic lifestyle-score, the odds of having hypertension decreased with higher scores, with ORs of 0.49 (95% CI: 0.46, 0.54) for the intermediate level and 0.34 (95% CI: 0.32, 0.37) for the ideal level compared with the poor level (Model 2, Figure 2A and Table S1). Age-stratified multivariate regression analysis showed that the association between the basic lifestyle-score and BP was stronger in the older age groups (40 to ≤50 years: SBP −2.46 (95% CI: −2.62, −2.29), DBP −2.25 (95% CI: −2.36, −2.14); >50 years: SBP −2.34 (95% CI: −2.70, −1.98), DBP −1.72 (95% CI: −1.93, −1.52)) compared with the younger age groups (Table S2).
## 3.3. Association of Individual Lifestyle Factors and Their Combined Scores with BP/Hypertension
A 1-point higher waist-circumference score was associated with a 3.63 mmHg lower SBP (95% CI: −3.80, −3.47) and a 3.53 mmHg lower DBP (95% CI: −3.64, −3.42). Similarly, the smoking, cholesterol, sleep duration, PA, alcohol intake, and NRF9.3 index-scores were associated with lower SBP and/or DBP (Model 2; Table 3).
Logistic regression analyses only showed significant associations between waist-circumference, smoking, cholesterol, sleep duration, and PA scores, and the odds of hypertension (Model 2, Figure 1A and Table S1). Across levels of each individual lifestyle factor, the odds of having hypertension decreased with scoring higher for waist-circumference, cholesterol, sleep duration (only for ideal vs. poor level), and PA (Model 2, Figure 2A and Table S1).
When the other lifestyle-score factors (sleep duration + PA + alcohol intake + diet quality) were combined, the association attenuated to −0.18/−0.62 mmHg lower SBP/DBP (Model 2; Table 3).
Significant associations were observed between the combined lifestyle-score factors and the odds of hypertension (Model 2, Figure 1B and Table S1). Across the levels of the combined lifestyle-score factors, the odds of having hypertension decreased with higher scores, with an OR of 0.80 (95% CI: 0.69, 0.92) for the ideal level compared with the poor level (Model 2, Figure 2B and Table S1).
Age-stratified analysis showed comparable results of individual score relations to BP, with a trend of stronger associations between waist-circumference-score and BP in older compared to younger participants (Table S2). However, relations between the combined individual scores and BP attenuated and were no longer statistically significant in age-stratified analysis (Table S2).
## 3.4. Association of Inclusion of Individual Lifestyle Factors to the Basic Score with BP/Hypertension
The relative impact of each lifestyle factor on the basic lifestyle-score showed that the association with SBP and DBP attenuated when adding sleep duration, PA, and diet quality, but remained statistically significant (Model 2, Table 3). However, the addition of alcohol intake to the basic lifestyle-score only slightly altered the results (SBP: −1.91 mmHg (95% CI: −2.00, −1.81); DBP: −1.85 mmHg (95% CI: −1.92, −1.79)).
When alcohol was added to the basic lifestyle-score + sleep + PA, it did not further attenuate the results (SBP: −1.07 mmHg (95% CI: −1.14, −1.00); DBP: −1.27 mmHg (95% CI: −1.32, −1.22)) (Model 2, Table 3). Thus, in model 3, sleep + PA + diet quality was added to the basic model and adjusted for alcohol intake; however, the results remained the same.
The relationship with the odds of hypertension also attenuated but remained significant when all other lifestyle components (sleep duration, PA, alcohol intake, and diet quality) were added to the basic lifestyle-score (Model 2, Figure 1A and Table S1). Associations prevailed across the levels of lifestyle-score factors included in the basic score (Model 2, Figure 2A and Table S1).
For age-stratified analysis, lifestyle factors included in the basic lifestyle-score showed a stronger trend in the relation with BP among older (>50 y) compared to younger adults (≤30 y) (Table S2).
## 3.5. Association of Basic Lifestyle-Score with BP in Sub-Cohorts
The regression analyses were repeated using model 2 in the sub-cohorts that excluded participants with characteristics that might bias the associations with BP (e.g., self-reported diagnosis of hypertension, antihypertensive drug use) (Table S3); the results prevailed and remained statistically significant.
## 4. Discussion
The present large cohort study evaluated cross-sectional associations of lifestyle-scores with BP/hypertension, reporting a 2.0 mmHg lower SBP (an epidemiologically significant difference at the population level [39]) and a 30% lower risk of hypertension for each 1-point higher adherence to a basic lifestyle-score (including waist-circumference, smoking, and serum cholesterol). When lifestyle factors were considered individually, only waist-circumference, a low serum cholesterol level, and low alcohol intake contributed to a lower SBP and/or DBP and a lower risk of hypertension, effects attributable to maintaining a healthy waist-circumference and low serum cholesterol. Although the significance of the associations prevailed, the associations attenuated with the addition of sleep duration, PA, and diet quality. Moreover, although evaluated in a smaller subsample, a lifestyle-score including sleep duration, PA, and diet quality did not show BP-lowering benefits comparable to the basic lifestyle-score. Significantly lower BP was observed with healthier lifestyle-scores even in young adults (≤30 y), with a larger mean difference in BP in the older age group (>50 y) compared with the younger age groups.
The relationships between lifestyle factors and BP found here are not surprising, given that they were chosen a priori based on the existing literature demonstrating their relationship with BP [4]. It is likely that some lifestyle variables have a stronger contribution to lowering BP than others, namely, more objective ones including waist-circumference and cholesterol levels. Furthermore, when alcohol intake was added to the basic lifestyle-score, it did not further attenuate the results, suggesting that alcohol is a confounder in the relationship, given its association with both BP (the outcome) [40] and waist-circumference [41], smoking [42], and serum cholesterol [43] (the exposures). On the other hand, when other factors such as PA, diet, sleep duration, and smoking were added to the basic lifestyle-score, the association with BP attenuated, suggesting that these factors may act as mediators in the association of the basic lifestyle-score with BP. The relationships between these factors and cholesterol or waist-circumference are well established [44,45,46,47,48,49]. For example, the attenuation observed when diet and PA were added to the basic lifestyle-score may be attributed to their significant and direct impact on weight and serum cholesterol levels. This suggests that interventions focused on healthier diets and increased PA are important and have the potential to reduce BP and the risk of hypertension [44,50]. Even in young adults (<30 y), lifestyle-scores were related to lower BP, supporting findings that maintaining healthy behaviors from an early age can have favorable impacts on BP and a reduction in hypertension risk [51].
The scores evaluated as part of this work demonstrated a significant relationship with the odds of hypertension. Furthermore, the scores are also suggestive of the magnitude of risk, with a more ideal lifestyle being associated with a lower risk of hypertension than an intermediate lifestyle, thus demonstrating the potential value of the score for assessing hypertension risk. Importantly, although the addition of sleep, PA, and diet attenuated the association of the basic lifestyle-score with SBP/hypertension, the lifestyle-score including only sleep, PA, and diet (albeit in a smaller subsample) did not show a lower BP/hypertension comparable to the basic lifestyle-score. This suggests that the basic score cannot simply be replaced by the lifestyle-score including sleep, PA, and diet in this population.
The present study fills a gap in evaluating the combined impact of several lifestyle factors on BP/hypertension and uses several validated measures for assessing lifestyle data, including the International PA Questionnaire [28] and the NRF9.3 [32]. The study used cross-sectional data, and therefore a temporal relationship between hypertension and lifestyle factors cannot be established. As with any interview-based data collection, some variables used in the lifestyle-score were subject to misreporting or recall bias. Another consideration is that the Airwave Health Monitoring Study recruits from a distinctive population: those working in the police force [24]. As such, it provides a novel opportunity to study a population with unique occupational challenges. However, the generalizability of research conducted in this cohort may be limited, as the study population is predominantly male with a small proportion of staff from ethnic minorities. It is unknown how well the results generalize to the UK population at large, or to populations outside the UK, although the underlying biological pathologies are likely to be similar in other groups. Future work can aim to validate and assess the reliability of this tool in the current and other cohorts.
## 5. Conclusions
Given the pervasiveness of hypertension and its contribution to mortality worldwide [3], identifying which lifestyle behaviors impact hypertension risk the most is valuable. This study highlights the value of objective factors, including waist-circumference and cholesterol levels, which suggested a stronger contribution to BP than other factors. The combined impact of lifestyle behaviors suggests that alcohol is a confounder in the BP–lifestyle score relationship and that factors influencing weight, such as diet and PA, may be important in managing the risk of hypertension. Strategies to adopt healthy behaviors may be useful to clinicians, researchers, and members of the public for lowering BP and managing hypertension risk, even in young adulthood.
# Association between Mothers’ Emotional Problems and Autistic Children’s Behavioral Problems: The Moderating Effect of Parenting Style
## Abstract
Mothers’ emotional problems are associated with autistic children’s behavioral problems. We aimed to test whether parenting styles moderate the associations between mothers’ mood symptoms and autistic children’s behavioral problems. A sample of 80 mother–autistic child dyads was enrolled at three rehabilitation facilities in Guangzhou, China. The Social Communication Questionnaire (SCQ) and the Strengths and Difficulties Questionnaire (SDQ) were used to collect the autistic symptoms and behavioral problems of the children. Mothers’ depression and anxiety symptoms were measured using the Patient Health Questionnaire 9 (PHQ-9) and the General Anxiety Disorder 7-item (GAD-7) scale, respectively, and parenting styles were measured using the Parental Behavior Inventory (PBI). Our results show that mothers’ anxiety symptoms were negatively associated with their children’s prosocial behavior scores (β = −0.26, p < 0.05) but positively related to their social interaction scores (β = 0.31, p < 0.05). A supportive/engaged parenting style positively moderated the effect of mothers’ anxiety symptoms on the prosocial behavior score (β = 0.23, p = 0.026), whereas a hostile/coercive parenting style moderated it negatively (β = −0.23, p = 0.03). Moreover, hostile/coercive parenting styles positively moderated the effect of mothers’ anxiety symptoms on social interaction problems (β = 0.24, p < 0.05). The findings highlight that, when mothers adopted a hostile/coercive parenting style while experiencing high anxiety, their autistic children may have more serious behavioral problems.
## 1. Introduction
Children with autism spectrum disorder (ASD) may exhibit persistent deficits in social communication and social interaction as well as restricted and repetitive patterns of behavior [1]. Beyond the core symptoms, behavioral problems are also commonly observed but difficult to manage among children with ASD [2]. Thus, parenting children with ASD is challenging [3,4,5], and mood symptoms and disorders are widely reported in parents of autistic children [6,7,8]. According to a previous study, mothers with stable and positive moods support functional improvements in their children with ASD; thus, providing support for mood and mood-related problems in mothers of children with ASD is beneficial for both the child and the mother [9].
Regarding parents’ emotional problems and children’s behavioral problems, research in the general population has identified well-established connections between parents’ mental health difficulties and the symptoms and behavioral problems of their children [10,11,12]. A number of studies have reported on the relationship between mothers’ mood symptoms and behavioral problems in children with ASD [13], and in those with autistic symptoms [8,14]. Furthermore, children with ASD have a better prognosis, including in social interaction, attention problems, and hyperactivity/inattention symptomatology, when parents have healthy and positive emotional states [9,15,16]. However, previous research had limitations, such as a lack of focus on possible moderating variables that could explain the relationship between anxiety and depression symptoms in mothers and the behavioral problems and symptoms of their children with ASD. Thus, understanding the role of mothers’ depression and anxiety symptoms in the development and maintenance of behavioral difficulties in children with ASD is important.
Research has established that parenting has a critical influence on a child’s development, such as their social development [17] and self-esteem [18], and is also related to the quality of the parent–child relationship [19]. A series of studies has shown that parenting styles are associated with the behavioral problems of children with ASD, such as externalizing problems [20] or internalizing behavioral problems [21]. In addition, parenting styles may affect the intervention and rehabilitation of ASD; for example, a positive parenting style predicts better social competence in children with ASD [22,23]. However, mothers with anxiety symptoms show a more negative parenting style, such as intrusive involvement, with anxious children compared to children with typical development (TD) [24]. Hentges et al. found that mothers’ depression has an indirect effect on internalizing problems in children with TD via hostile parenting [25]. Thus, understanding how parenting style interacts with mothers’ mood problems and autistic children’s behavioral problems will help with the development of targeted interventions, as well as with understanding whether parenting style may worsen or protect against these effects.
In the current study, we recruited 2–12-year-old autistic children and their mothers to examine the possible moderating influence of parenting style on the association between the mothers’ mood problems and their autistic children’s behavioral problems. Based on existing research, the current study holds several hypotheses: (1) mothers with higher levels of anxiety and depression will have autistic children with more behavioral problems; (2) parenting styles will moderate the association between mothers’ moods and behavioral problems in autistic children. More specifically, hostile/coercive parenting styles combined with high levels of anxiety and depression will be associated with more behavioral problems in autistic children. To the best of our knowledge, this study is the first to consider the joint effects of mothers’ moods and parenting styles on their autistic children’s behavioral problems.
## 2.1. Participants and Procedures
Data were from an ongoing study that started in September 2020 in Guangzhou, China. Children aged 2 to 12 years were recruited from three special schools affiliated with the Guangzhou Disabled Persons Federation. Inclusion was restricted to children diagnosed with ASD by a child psychiatrist (according to the DSM-V). Children with other neurodevelopmental abnormalities, such as epilepsy and cerebral palsy, and those with physical disabilities were excluded. A total of 110 parent–child (with ASD) dyads agreed to participate in the study. After excluding all questionnaires submitted by fathers, we were left with 80 mother–child dyads for the final analysis. Written informed consent from the mothers was obtained before the questionnaire and behavior evaluations were completed. The mothers were asked to complete a structured questionnaire (covering anxiety, depression symptoms, parenting styles, child behavioral problems, and demographic information, described below) with uniform guidance. The autistic children’s cognition was assessed by licensed researchers who had received standardized training. This study was approved by the Ethics Committee of the School of Public Health at Sun Yat-sen University.
## 2.2. Scales in the Questionnaire
Mothers’ anxiety: The General Anxiety Disorder 7-item (GAD-7) scale, a brief seven-item self-report scale designed to assess generalized anxiety, was used to evaluate the mothers’ anxiety symptoms [26]. Each of the seven items is scored from 0 (not at all) to 3 (nearly every day). The total GAD-7 score ranges from 0 to 21, with a higher score indicating greater symptoms. The current study used the recommended mild-to-severe cut-off score for anxiety (GAD-7 ≥ 5) to classify subjects with or without anxiety symptoms. The Cronbach’s α in a Chinese sample was 0.898 [27].
Mothers’ depression: Symptoms of depression were evaluated with the Patient Health Questionnaire 9 (PHQ-9), a nine-item self-report scale designed to assess symptoms of depression in mothers [28]. Each of the nine items is scored from 0 (not at all) to 3 (nearly every day), and the total score ranges from 0 to 27, with a higher score indicating greater symptoms. The current study used the recommended mild-to-severe cut-off score for depression (PHQ-9 ≥ 5) to classify subjects with or without depression symptoms. The Cronbach’s α in a Chinese sample was 0.85 [29].
Parenting style: We used the Parental Behavior Inventory (PBI) to evaluate the mothers’ parenting styles. The PBI, designed by Lovejoy et al. [30], is a parental self-evaluation of parenting behavior with preschool and junior school children. It is a 20-item self-rated questionnaire covering supportive/engaged and hostile/coercive parenting styles. The Cronbach’s α values of the supportive/engaged and hostile/coercive subscales were 0.807 and 0.652, respectively, in a Chinese sample [31].
Children’s behavioral problems: We used the Strengths and Difficulties Questionnaire (SDQ) to evaluate the behavioral problems of the children. Mothers were asked to complete the extended version of the SDQ for children with ASD. The SDQ is a 25-item questionnaire covering hyperactivity/inattention (SDQ-HA, 5 items), emotional symptoms (SDQ-ES, 5 items), peer problems (SDQ-PP, 5 items), conduct problems (SDQ-CP, 5 items), and prosocial behavior (SDQ-PB, 5 items); it is designed to assess behavioral and emotional problems in children and adolescents [32]. Each of the 25 items is rated as not true (0), somewhat true (1), or certainly true (2), and each SDQ subscale consists of five items, yielding scores between 0 and 10. The hyperactivity/inattention, emotional symptoms, peer problems, and conduct problems subscales produce a total difficulties score, which can range between 0 and 40; a higher score indicates poorer functioning. For the prosocial strength subscale, a higher score indicates better functioning. The current study used the recommended cut-off scores for each subscale (SDQ-HA ≥ 7, SDQ-ES ≥ 7, SDQ-PB ≤ 4, SDQ-PP ≥ 6, and SDQ-CP ≥ 5) to classify children with ASD as with or without behavioral problems. The SDQ has good reliability and structural validity in Chinese individuals [33].
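A minimal sketch of the scoring and cut-off rules described above (the function name and input layout are hypothetical, not from the SDQ’s own materials):

```python
def sdq_scores(items: dict) -> tuple:
    """items: subscale name -> list of five ratings
    (0 = not true, 1 = somewhat true, 2 = certainly true)."""
    subscale = {name: sum(ratings) for name, ratings in items.items()}
    # Total difficulties sums the four problem subscales (0-40);
    # prosocial behavior (PB) is a strength and is excluded.
    total = sum(subscale[s] for s in ("HA", "ES", "PP", "CP"))
    abnormal = {"HA": subscale["HA"] >= 7, "ES": subscale["ES"] >= 7,
                "PB": subscale["PB"] <= 4, "PP": subscale["PP"] >= 6,
                "CP": subscale["CP"] >= 5}
    return subscale, total, abnormal
```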
Autistic behaviors: The Social Communication Questionnaire (SCQ) was used to evaluate the core symptoms of autism. The SCQ is a 40-item scale for parents or caregivers designed as a brief screening measure of ASD [34]. The items are based on those with the most discriminative diagnostic efficacy in the ADI-R. The SCQ is divided into three domains, namely, the social interaction domain (S), the communication domain (C), and the restricted, repetitive, and stereotyped patterns of behavior domain (R). All items are answered with “Yes” or “No” (0 = no abnormal behavior, 1 = abnormal behavior), and a higher score indicates greater symptoms on that subscale. The Cronbach’s α coefficient for the present study was 0.89.
## Demographic Information
Baseline characteristics were recorded using written questionnaires, including the mother’s age, education, ethnicity, age and gender of the child with ASD, family income, and the number of family members.
## 2.3. Statistical Analysis
The study was designed to answer another question, but the collected data were used here to address the current questions. SPSS v23.0 statistical software was used to conduct the statistical analysis. Descriptive statistics for continuous variables are presented as the mean (M) and standard deviation (SD), and count data are described by prevalence (%). Pearson’s correlations were computed between maternal anxiety and depression symptoms, parenting style, and behavioral problems in children with ASD. Multiple linear regression was used to determine whether parenting style moderated the associations between mothers’ anxiety or depression symptoms and behavioral problems in children with ASD. We entered mothers’ anxiety or depression symptoms and parenting style in the first step, and the interaction of mothers’ anxiety or depression symptoms with parenting style in the second step. All regression analyses included the children’s gender and age, family income, and maternal education as covariates in the third step. All betas are presented as standardized regression coefficients, and the significance level was p < 0.05. A simple slope analysis was conducted using the Process 2.16 macro plug-in for SPSS 23.0.
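The analysis was run in SPSS; purely for illustration, a roughly equivalent hierarchical moderation model can be sketched in Python with statsmodels (file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dyads.csv")  # one row per mother-child dyad

# Standardize the predictor and moderator so betas are comparable
for col in ("anxiety", "hostile_coercive"):
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# Step 1: main effects; Step 2: add the interaction (moderation term);
# Step 3: add covariates.
step1 = smf.ols("prosocial ~ anxiety + hostile_coercive", data=df).fit()
step2 = smf.ols("prosocial ~ anxiety * hostile_coercive", data=df).fit()
step3 = smf.ols("prosocial ~ anxiety * hostile_coercive + child_sex"
                " + child_age + family_income + mother_edu", data=df).fit()
print(step3.params["anxiety:hostile_coercive"])  # interaction coefficient
```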
## 3.1. Demographic Information
Table 1 outlines the demographic information for the children included in the analysis. Among the children with ASD (87.5% boys), 53 (66.3%) were under 6 years old. As for the mothers, 69.3% were more than 35 years old, 72.5% had low education (less than 9 years), and 76.3% had a family income of less than 8000 yuan per month.
## 3.2. Prevalence of Behavioral Problems in Children with Autism and Mothers’ Emotional Problems
The prevalence of abnormal SDQ-HA, SDQ-ES, SDQ-PB, SDQ-PP, and SDQ-CP behavioral problems (shown in Supplementary Table S1) among children with ASD was 53 (66.3%), 5 (6.3%), 62 (77.5%), 71 (88.8%), and 13 (16.3%), respectively. In addition, the prevalence of depression and anxiety symptoms in the mothers of children with ASD was 31 (38.8%) and 38 (37.5%), respectively. The mean ± standard deviation of the supportive/engaged parenting style score was 33.03 ± 7.74, and that of the hostile/coercive parenting style was 17.91 ± 6.49.
## 3.3. Correlation Analysis
In the correlation analysis between the mothers’ mood symptoms and the children’s symptoms (Table 2), the mothers’ depression and anxiety symptoms were positively associated with the children’s hyperactivity score (r = 0.28 and 0.29 for depression and anxiety, respectively; p < 0.05) and negatively associated with the prosocial behavior score (r = −0.27 and −0.26 for depression and anxiety, respectively; p < 0.05). In addition, mothers’ depression symptoms were related to a higher conduct problems score (r = 0.25, p < 0.05). As for parenting style, the supportive/engaged style was associated with lower SCQ scores in the social domain (r = −0.31, p < 0.05) and repetitive domain (r = −0.33, p < 0.05) and a lower SCQ total score (r = −0.33, p < 0.05), whereas the hostile/coercive style was associated with a higher SDQ total score (r = 0.24, p < 0.05) and a higher communication domain score (r = 0.24, p < 0.05).
## 3.4. Relationship between Mothers’ Anxiety Symptoms and Children’s Prosocial Behaviors Measured by SDQ Moderated by Parenting Style
After adjusting for the children’s gender and age, family income, and mothers’ education, multiple linear regression analysis showed that mothers’ anxiety symptoms were negatively associated with children’s prosocial behavior (β = −0.26, p < 0.05), and a supportive/engaged parenting style had a marginally positive relationship with children’s prosocial behavior (β = 0.21, p = 0.051). In addition, a supportive/engaged parenting style positively moderated the effect of mothers’ anxiety symptoms on children’s prosocial behavior (β = 0.23, p = 0.026); conversely, a hostile/coercive parenting style negatively moderated the effect of mothers’ anxiety on children’s prosocial behavior (β = −0.23, p = 0.031; see Table 3). The negative results for mothers’ depression and children’s behavioral problems collected with the SDQ are shown in Supplementary Table S2.
We used a simple slope analysis to probe the interaction between mothers’ anxiety symptoms and a supportive/engaged parenting style on children’s prosocial behavior. We set the conditional values of the supportive/engaged parenting style at one standard deviation above and below the mean and calculated the simple slope of maternal anxiety symptoms on children’s prosocial behavior at each level. The results showed that mothers’ anxiety symptoms were negatively associated with children’s prosocial behavior when the supportive/engaged domain was low (b = −0.282, p < 0.01), whereas there was no significant association when the supportive/engaged domain was high (b = −0.212, p > 0.05; see Figure 1a).
At high levels of a hostile/coercive parenting style, mothers’ anxiety symptoms were negatively associated with children’s prosocial behavior (b = −0.300, p < 0.01), but there was no association when the hostile/coercive domain was low (b = −0.063, p > 0.05; see Figure 1b).
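For illustration, the simple-slope computation reduces to evaluating the regression of the outcome on the predictor at fixed moderator values (here ±1 SD around the mean); a minimal sketch with hypothetical coefficients:

```python
def simple_slope(b_predictor: float, b_interaction: float,
                 moderator_value: float) -> float:
    """For a mean-centered moderated regression
    y = b0 + b1*x + b2*m + b3*x*m + ..., the slope of y on x at a
    fixed moderator value m is b1 + b3*m."""
    return b_predictor + b_interaction * moderator_value

# Hypothetical coefficients: slope of anxiety on prosocial behavior at
# one SD below/above the moderator mean (SD = 1 after standardization).
low = simple_slope(b_predictor=-0.18, b_interaction=-0.12, moderator_value=-1.0)
high = simple_slope(b_predictor=-0.18, b_interaction=-0.12, moderator_value=+1.0)
print(low, high)  # -0.06 vs. -0.30: steeper negative slope at high hostility
```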
## 3.5. Effects of Mothers’ Anxiety Symptoms on Children’s Social Interaction Moderated by Parenting Style and Measured by SCQ
After adjusting for the children’s gender and age, family income, and mothers’ education, multiple linear regression analysis showed that a hostile/coercive parenting style positively moderated the effect of mothers’ anxiety symptoms on children’s social interaction (β = 0.24; p < 0.05; see Table 4). A supportive/engaged style marginally moderated the effect of mothers’ anxiety symptoms on children’s social interaction (β = 0.20; p = 0.052; see Table 4). The results for mothers’ depression and the children’s other behavioral problems measured by the SCQ are shown in Supplementary Table S3.
The simple slope analysis showed that mothers’ anxiety symptoms were positively associated with children’s social interaction problems under a high hostile/coercive parenting style (b = 0.423, p < 0.01) but not under a low hostile/coercive parenting style (b = 0.114, p > 0.05; see Figure 2).
## 4. Discussion
Parenting a child with autism is difficult, and emotional problems, including depression and anxiety, have been widely reported among parents of autistic children [8,35]. Using a cross-sectional design, we tested whether parenting styles moderated the association between mothers’ emotional symptoms and autistic children’s behavioral problems and social communication. The main findings of this study confirmed that (1) mothers’ anxiety and depression symptoms are positively associated with the severity of behavioral problems among children with ASD; and (2) when the parenting style is low in supportive/engaged behavior or high in hostile/coercive behavior, mothers’ anxiety symptoms are associated with a decrease in children’s prosocial behavior and an increase in social interaction problems.
Our findings indicated that mothers’ anxiety was associated with less prosocial behavior and more social interaction problems in autistic children [8,14]. We propose several mechanisms to explain why a negative parenting style may worsen the situation. (1) More anxiety symptoms can cause mothers to adopt negative parenting strategies, such as aggressive behavior and violence [36], thereby eroding children’s self-esteem and their ability to regulate their own emotions [37] and increasing children’s behavioral problems. On the contrary, a supportive parenting style provides children with positive parent–child interactions, which the child may draw on when interacting with others, thereby benefitting their peer relationships and social belonging, providing sufficient social support when encountering problems, and decreasing the rate of problem behaviors [38,39,40,41]. (2) Research on children with TD has shown that negative parenting behaviors, such as hostile/coercive ones, can function as a risk factor in the development of children’s behavioral problems, whereas positive parenting behaviors can be a protective factor [25,42]; children with ASD may share the same mechanism. (3) Mothers’ anxiety symptoms lead to intrusive, hostile, and neglectful behaviors, as well as less involvement in parenting [43,44], which may cause problematic parent–child interactions, thereby decreasing the prosocial behaviors of children with ASD [45].
A negative correlation between mothers’ depression symptoms and children’s prosocial behavior [46,47] has been found among TD children, which is similar to our findings in autistic children. This finding may be attributed to the incapability of depressed mothers to respond to their children’s needs, which limits the whole family’s initiative to seek proper intervention for the children [48]. In addition, mothers’ internal physiological changes can be passed on to their children, which can increase the emotional and behavioral problems of autistic children [49]. However, the relationship between mothers’ depression symptoms and children’s behavioral problems was not moderated by parenting style in the current study, which may be attributable to the limited sample size.
The present study found that mothers of children with ASD had rates of depression and anxiety symptoms (38.8% and 37.5%, respectively) comparable to previous research [50,51,52], which confirmed the elevated risk of depression among mothers of autistic children. We also compared our rates with those from other regions of China [8,53] and found that anxiety and depression symptoms among mothers of children with ASD were more prevalent in the present study. In addition, our study identified the most salient problem behaviors in autistic children, which were hyperactivity/inattention, prosocial behavior, and peer problems; this was similar to previous research [54,55]. Our current findings corroborate and further attest that children with ASD often exhibit co-occurring behavioral problems. According to Coplan et al., pragmatic language is a possible pathway in the development of behavioral problems, as it plays an important role in children’s communication with peers, especially in the school-age period. Children solve problems and achieve social goals through adequate language skills, which autistic children lack [56]. Thus, limited language skills may make them feel insecure about engaging in peer relationships and cause more problematic behaviors [57]. Moreover, previous studies suggested that other features of autism, such as sensory difficulties [58] or resistance to change [59], also cause more problem behaviors.
Our results also confirmed that a hostile/coercive parenting style was positively related to hyperactivity and the total behavioral problem level of autistic children (see Supplementary Table S2), as reported by Maljaars et al. [21]. The coercion theory [42] proposes that, in a coercive cycle, aversive child behaviors and parenting behaviors reciprocally influence each other, resulting in the negative reinforcement of undesirable behaviors in both children and parents [60]. Conversely, a positive parenting style is associated with the prosocial behaviors of children with TD [61,62]. This finding is in line with ours (see Supplementary Table S3), which also supports the coercion theory.
This study has a few limitations. First, in view of the cross-sectional design, causality could not be addressed, and the negative finding of an interaction effect between parenting style and depressive mood could be attributed to the relatively small sample size. We recommend a longitudinal study with a larger sample size to clarify the potential mechanism of the association. Second, our study did not consider the role of fathers; the parenting style and emotional symptoms of fathers should be considered in future studies. Third, as there was no normative or clinical comparison group, a more robust design should be considered. Fourth, we relied on maternal reports of children’s behavioral problems from a single informant (the mother). Future studies should include multiple informants, such as teachers and fathers, to further explore these associations.
## 5. Conclusions
In this research, we confirmed a high rate of anxiety and depression symptoms among autistic children’s mothers, as well as a high rate of behavioral problems among the autistic children. High levels of anxiety and depression in mothers were linked to more behavioral problems in their autistic children. Negative parenting styles, i.e., low supportive/engaged or high hostile/coercive styles, further strengthened the association between mothers’ mood problems and less prosocial behavior and more serious social interaction problems among these children. So far, most parenting programs aimed at parents of children with ASD have focused on improving communication in children; studies addressing parenting strategies are limited [63]. Thus, we propose that parents may need more support in coping with emotional problems and improving their parenting skills to decrease the problem behaviors of autistic children.
# mtR_find: A Parallel Processing Tool to Identify and Annotate RNAs Derived from the Mitochondrial Genome
## Abstract
RNAs originating from mitochondrial genomes are abundant in transcriptomic datasets produced by high-throughput sequencing technologies, primarily in short-read outputs. Specific features of mitochondrial small RNAs (mt-sRNAs), such as non-templated additions, length variants, sequence variants, and other modifications, necessitate the development of an appropriate tool for their effective identification and annotation. We have developed mtR_find, a tool to detect and annotate mitochondrial RNAs, including mt-sRNAs and mitochondria-derived long non-coding RNAs (mt-lncRNAs). mtR_find uses a novel method to compute the counts of RNA sequences from adapter-trimmed reads. When analyzing published datasets with mtR_find, we identified mt-sRNAs significantly associated with health conditions such as hepatocellular carcinoma and obesity, and we discovered novel mt-sRNAs. Furthermore, we identified mt-lncRNAs in early development in mice. These examples show the immediate impact of mtR_find in extracting novel biological information from existing sequencing datasets. For benchmarking, the tool was tested on a simulated dataset, and the results were concordant. For accurate annotation of mitochondria-derived RNAs, particularly mt-sRNAs, we developed an appropriate nomenclature. mtR_find encompasses the mt-ncRNA transcriptomes with unprecedented resolution and simplicity, allowing re-analysis of existing transcriptomic databases and the use of mt-ncRNAs as diagnostic or prognostic markers in the field of medicine.
## 1. Introduction
Mitochondria are organelles present within all eukaryotic cells, performing oxidative phosphorylation [1] and participating in apoptosis [2], among other processes. Metazoan mitochondria possess their own genomes, which are relatively small (usually 15–20 kb) and contain 14 to about 40 genes, typically 37 in vertebrates [3]. Owing to the multiple cellular copies of mitochondrial DNA, the abundance of mitochondrial transcripts can range from 5 to 30% of the total cellular RNA, depending on the cell type [4,5]. Mitochondrial non-coding RNAs (mt-ncRNAs) are referred to as those encoded in the mitochondrial genome, although nuclear genome-encoded non-coding RNAs (ncRNAs) can also be present in mitochondria [6]. Both mitochondrial small non-coding RNAs (mt-sRNAs) and long non-coding RNAs (mt-lncRNAs) have been identified both inside mitochondria and in other cellular compartments, and some gene regulatory functions have been proposed for them [4,5,7,8,9]. Despite the growing evidence of regulatory functions of mt-ncRNAs, no appropriate bioinformatic tools to identify them have been available to date.
There are tools such as MITOS [10] or DOGMA [11] to annotate mitochondrial genomes, but these tools cannot identify and quantify mt-ncRNAs. Although DOGMA can annotate nucleotide sequences to the mitochondrial genome, the tool requires the entire mitochondrial genome sequence as input and does not work with mt-ncRNAs, which are much shorter. The current analysis of high-throughput sequencing data relies on tools designed for nuclear genomic RNA. These tools, as well as DOGMA, cannot identify mt-sRNAs effectively, because mt-sRNAs frequently have non-templated additions, as well as sequence and length variants [12]. Tools such as tDRmapper [13], SPORTS [14], or MINTmap [15] can be used to analyze mitochondrial tRNA-derived fragments (mt-tRFs). However, there is no tool to simultaneously analyze all small RNAs (sRNAs) mapping to the mitochondrial genome.
Most tools designed for small RNA data analysis deploy a three-step procedure with some minor modifications [16]: (1) read count generation, (2) mapping the unique set of sequences to a reference FASTA, and (3) parsing the mapped output files. Read count generation is the most time-consuming step, but its runtime can be significantly reduced by parallelizing the processes across all available CPU cores. We have developed mtR_find, a bioinformatic tool for the identification, annotation, and analysis of mt-RNAs in new or existing transcriptomic datasets produced by any type of sequencing technology. mtR_find uses Python’s multiprocessing functionality to parallelize the analysis of multiple sequencing files for read count generation, thereby massively reducing the data processing time. Along with the tool, we propose a nomenclature that encompasses mt-RNA specificity. The tool allows retrieving important biological information from existing datasets in a high-throughput mode with unprecedented efficiency.
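The parallelization idea can be illustrated with a short sketch. This is not mtR_find’s actual code, just a minimal example of collapsing adapter-trimmed FASTQ files into per-sequence read counts with one worker per file (file names hypothetical):

```python
from collections import Counter
from multiprocessing import Pool

def count_reads(fastq_path: str) -> Counter:
    """Collapse one adapter-trimmed FASTQ file into sequence -> count."""
    counts = Counter()
    with open(fastq_path) as fq:
        for i, line in enumerate(fq):
            if i % 4 == 1:  # the sequence line of each 4-line FASTQ record
                counts[line.strip()] += 1
    return counts

if __name__ == "__main__":
    files = ["sample1.fastq", "sample2.fastq"]
    with Pool() as pool:  # defaults to one worker per available CPU core
        per_file = pool.map(count_reads, files)
    merged = sum(per_file, Counter())  # total count of each unique sequence
```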
## 2.1. Performance
The total read counts for the three datasets were 332.3 million (dataset-1, sRNA-seq of liver samples from malignant tumor tissue of HCC patients and non-malignant tissue from uninfected individuals), 318.2 million (dataset-2, sRNA-seq of semen samples from lean versus obese men), and 93.4 million (dataset-3, RNA-seq of mouse oocytes; Supplementary File S1). The sRNA datasets were analyzed through parallel processing by mtR_find, and the total execution times for datasets 1 and 2 were 3 min 44 s and 2 min 38 s, respectively. For comparison, the total execution times using MINTmap for dataset-1 and dataset-2 were 48 min 2 s and 34 min 7 s, respectively. The mt-lncRNA analysis was not performed using parallel processing due to pickling limitations in the Python multiprocessing module [17], and its total execution time was 11 min 29 s. The durations of serial execution for datasets 1 and 2 were 9 min 9 s and 11 min 40 s, respectively. Consequently, serial execution took roughly 2.75 times longer than parallel execution, indicating the efficiency of parallel execution. Besides parallel execution, there are other differences in the way the tool handles mt-sRNAs and mt-lncRNAs. The tool does not consider sequences longer than 50 nt for mt-sRNA computation or shorter than 50 nt for mt-lncRNA computation. For mt-sRNAs, every single sequence is considered unique by the tool. For mt-lncRNAs, the tool outputs the unique sequence counts and, in addition, sums the counts of lncRNA sequences with the same 5′ end but variable 3′ ends. In addition to mt-lncRNAs longer than 200 nt, the mt-lncRNA option of mtR_find also identifies ncRNAs that are 50–200 nt long, which are categorized as mid-size or intermediate RNAs. To study only lncRNAs longer than 200 nt, users can pass the "--filter 200" argument as a command line option when running mtR_find.
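The 5′-end grouping rule for mt-lncRNAs described above can be sketched as follows; the input tuple format and function name are hypothetical, and this simplification ignores details such as reference sequence and mismatch handling:

```python
from collections import defaultdict

def collapse_by_5p_end(mapped_reads, min_len=50):
    """Sum the counts of lncRNA candidates (>= 50 nt) that share a strand
    and 5' start position but differ at the 3' end.
    mapped_reads: iterable of (strand, start_pos, sequence, count)."""
    grouped = defaultdict(int)
    for strand, start, seq, count in mapped_reads:
        if len(seq) >= min_len:  # shorter sequences are treated as mt-sRNAs
            grouped[(strand, start)] += count
    return grouped

reads = [("+", 2690, "A" * 90, 12), ("+", 2690, "A" * 95, 7)]
print(collapse_by_5p_end(reads))  # ('+', 2690) -> 19: 3' variants merged
```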
## 2.2. Read Statistics
Datasets-1, -2, and -3 had, respectively, 36,136, 93,128, and 9222 unique sequences with a total read count greater than 200 (Supplementary Files S2–S4). The numbers of sequences that mapped to the mitochondrial genome were 2120 (constituting $1.2\%$ of total reads), 8899 ($4.4\%$), and 178 ($1.4\%$), respectively (Supplementary Files S5–S7). Of these, reads mapping to the heavy strand comprised $71.5\%$, $67.4\%$, and $43.5\%$ of the total mitochondria-derived sequences, respectively, while the remaining reads mapped to the light strand (Supplementary File S8, Figures S1–S3).
## 2.3. Length Distribution and Annotation of mt-ncRNAs
We found a diverse range of sizes (Supplementary File S8, Figure S4) and gene origins (Supplementary File S8, Figure S5) of mitochondrial non-coding RNAs in the datasets examined. Datasets-1 and -2 were enriched in mt-sRNAs in the size ranges of 31–32 nt and 27 nt, respectively, while the mt-lncRNAs in dataset-3 ranged from 87 to 141 nt in length. Most of the mt-lncRNAs in dataset-3 had length variants (Supplementary File S7). The majority of them belonged to three genes, namely, ATP6, ATP8, and CytB (Supplementary File S8, Figure S5C).
## 2.4. Differential Expression of mt-ncRNAs
There were differences in the number of reads mapping to mitochondrial genes between the subject and control groups in both dataset-1 and dataset-2 (Supplementary File S8, Figure S5). PCA of mt-sRNAs (Supplementary File S8, Figures S6 and S7) and the heatmap of the top 50 most variable read sequences (Figure 1) showed clustering into two groups consistent with the subject and control groups, with small within-group variability attributable to biological replicates.
Differential expression (DE) analysis of mt-ncRNAs was performed on dataset-1 (chronic hepatitis C-associated cancer vs. non-cancer liver samples; chronic hepatitis B-associated cancer vs. non-cancer liver samples; chronic hepatitis C-associated cancer vs. uninfected cancer liver tissue samples; and chronic hepatitis B-associated cancer vs. uninfected cancer liver tissue samples, Table 1) and dataset-2 (semen from obese vs. lean subjects). In dataset-1, there was a significant reduction ($p \leq 0.005$) in the relative abundance of tRNA halves (tRHs) mapping to tRNA genes of nuclear genome origin, namely tRFs from tRNAGly and tRNAVal, in cancer tissue when compared to non-cancer liver tissue [18]. We observed a similar trend for DE mitochondrial tRHs. For example, in the chronic hepatitis C-associated cancer vs. non-cancer liver tissue comparison, 13 out of 354 DE tRFs were tRHs, and 10 of them were significantly downregulated in the cancer cells (Supplementary File S9). Five of these ten mitochondrial tRHs originated from tRNAVal. In dataset-2, 75 DE mt-sRNAs (39 up- and 36 down-regulated in semen samples from obese vs. lean individuals) were identified, all of them originating from the mitochondrial large subunit rRNA (Supplementary File S10). The majority of them existed as length variants, and all of them clustered at a region with sequence start sites between positions 2690 and 2706 of the mitochondrial large subunit (mtLSU) rRNA gene, with 2704 and 2705 being the two most common start sites.
## 2.5. Novel Mitochondrial tRFs and Non-Coding RNAs Detected by mtR_find
The DE mt-tRFs (783 unique mt-tRFs) from dataset-1 were compared with tRFs downloaded from MINTbase, an extensive database of 28,824 nuclear and mitochondrial tRFs obtained from 12,023 cancer datasets using the MINTmap tool [19]. There were 365 ($46.6\%$) tRFs not found in MINTbase, including 214 tRFs-5, 42 tRFs-3, 43 i-tRFs-3, 56 i-tRFs-5, 8 tRNA-half-5, and 2 tRNA-half-3 (Supplementary File S11). All of these novel tRFs had a normalized reads per million (RPM) value greater than one (Supplementary File S11), the cut-off value used in MINTbase.
## 2.6. Performance of the Tool with Simulated Data Set
There were 16 simulated mt-lncRNA sequences, including 7 from the heavy strand, 5 from the light strand, and 4 antisense to heavy-strand genes with substitutions, which were grouped as light-strand transcripts. The simulation gave results concordant with those of mtR_find (Supplementary File S8, Table S1). The CSV files from the simulation and from the mtR_find analysis were loaded as data frames using the Python pandas module, element-wise comparison was performed between the two data frames, and the outputs agreed (Supplementary File S12).
## 3. Discussion
mtR_find is the first small RNA tool to incorporate parallel processing, reading multiple input files simultaneously and processing them at the same time. mtR_find performs considerably faster than published small RNA tools such as MINTmap [15]. Testing mtR_find on the simulated dataset shows that its sensitivity is high. The read count algorithm of mtR_find can be reused to develop tools for the analysis of other sRNA types by replacing the reference and modifying the annotation criteria. Even though parallel processing significantly reduces the execution time, it has to be noted that the execution time is CPU-dependent. Furthermore, if the number of CPUs is not commensurate with the available RAM, the script might run into memory errors; in such a case, the user must manually lower the CPU count via the command line parameters to circumvent the issue. The execution time of mtR_find is much lower than that of MINTmap, and it also includes the time needed to download both the GTF file and the mitochondrial genome; if these files are provided manually as input, the execution time is reduced further. Moreover, mtR_find identified 365 tRFs that are not present in MINTbase v2.0. Due to the presence of overlapping reading frames in several mitochondrial genes, sequence start and end sites within ±3 nt were used for annotating the mt-sRNAs in our tool; indeed, 266 out of the 365 sequences had a sequence start or end site within ±3 nt of the gene start or end boundary, respectively (Supplementary File S11). Of these 266 mt-sRNAs, 42 had a sequence start or end site either before the 5′ end or after the 3′ end of the tRNA gene boundary, respectively. Hence, mtR_find is highly sensitive in capturing all mt-sRNAs from the mitochondrial genome.
mtR_find identified features in the test datasets that had not been identified before. mtR_find identified reads mapping to the light strand in the range of 28.5–$56.5\%$. This result is discrepant with previous studies on mt-sRNAs, which showed that reads from the light strand constituted approximately 3–$5\%$ of all mitochondrial reads [4,12]. Notably, we found a considerable number of reads mapping to the light strand in anti-sense orientation to the heavy-strand genes. Small RNAs derived from the nuclear genome are classified based on their biogenesis pathways, and the length of a small RNA acts as a proxy indicator of its biogenesis. For example, tRNA halves (tRHs), miRNAs, and piRNAs are typically 32–34 nt, 21–22 nt, and 26–31 nt long, respectively, in most studied species [20]. A review of the findings from the original studies (datasets-1 and -2; [18,21]) revealed that these datasets were enriched in tRHs and piRNAs of nuclear genome origin, respectively. Interestingly, we found that the majority of mt-sRNAs in dataset-1 were tRHs of 31–32 nt, and this frequency of mitochondrial tRHs was strikingly similar to that of nuclear tRHs [18], suggesting a similar biogenesis pathway. In dataset-2, the majority of mt-sRNAs, 27 nt in size, mapped to mt-rRNA. Although this size range is indicative of piRNA biogenesis, there is only a single study showing the localization of PIWI proteins as well as piRNAs mapping uniquely to the mitochondrial genome [22]. We found that the sequence start sites of these 29 putative mitochondrial piRNAs [22] either exactly overlapped or were within ±3 nt of the start sites of the 27-nt mt-sRNAs from dataset-2. However, it is not known whether these mt-sRNAs are processed through a particular biogenesis pathway with a defined biological function. Except for tRFs, no curated database exists for mitochondria-derived sRNAs or ncRNAs; therefore, none of the remaining differentially expressed mt-RNAs from datasets 1 and 2 have been catalogued before. In the case of the mt-lncRNAs in dataset-3, the majority were derived from ATP6, ATP8, and CytB. lncCytB is among the most abundant mitochondrial lncRNAs in HeLa cells [23], and its abnormal trafficking has been demonstrated in human hepatocellular carcinoma cells [24]. To our knowledge, the other mt-lncRNAs found in mouse oocytes and 1-cell embryos (dataset-3) have no functional annotations yet.
The mt-sRNAs identified in dataset-1 and dataset-2 might have biological implications. The abundance of tRHs of nuclear genome origin is positively correlated (Spearman's rho = 0.67–0.87) with angiogenin mRNA/protein abundance in non-cancer liver tissue [18]. Differences in the expression of nuclear genome-derived tRFs produced through enzymatic cleavage by angiogenin have been observed [25]. These nuclear genome-derived tRFs bind to cytochrome C (a protein complex partially encoded by the mitochondrial genome) to prevent cells from undergoing apoptosis [25], and it has also been shown that these tRFs improve cell survival by acting in response to stress [26,27]. Although it is unknown whether tRFs of mitochondrial origin act in a similar way, differences in the expression of mitochondrial non-coding RNAs have been associated with cancer [8,28,29]. Moreover, it has been shown that processing of the mitochondrial tRNAs at both the 5′ and 3′ ends has a substantial effect on mitochondrial gene expression [30,31]. Since mitochondrial tRFs are generated from both the 5′ and 3′ ends of mitochondrial tRNAs, and aberrant expression of mitochondrial genes leads to many disease conditions including cancer, the DE mitochondrial tRFs in dataset-1 could potentially be implicated in the disease condition. For dataset-2, the original authors suggested that differences in piRNA expression between spermatozoa from lean and obese men may increase the chances that offspring develop obesity. No studies investigating the expression of mt-sRNAs in obesity are available; however, mitochondrial peptides have been shown to be involved in regulating metabolism [32], and their expression is hypothesized to be controlled by mt-sRNAs [4]. Hence, altered expression of mt-sRNAs may result in an impaired metabolic pathway, which, in turn, might contribute to obesity.
Interestingly, not a single mt-sRNA mapped to the termination association sequence (TAS) in the mitochondrial DNA control region in either dataset-1 or dataset-2, even though small RNAs originating from the TAS region (co-ordinates 16,161 to 16,188 in the mouse mtDNA sequence) within the mitochondrial control region have been reported to be expressed in mice [33].
Studies on tRFs have shown that a disproportionately high number of unique tRFs is derived from mitochondrial tRNA genes ($$n = 22$$) compared to nuclear tRNA genes ($$n = 625$$) in humans [34,35]. For example, a study on samples from prostate cancer patients demonstrated that $62.0\%$ of tRFs originated from nuclear tRNA genes, while the remaining $38\%$ originated from mitochondrial tRNA genes [35]. This indicates the diversity of mitochondrial tRFs. Many of these mt-sRNAs map uniquely to the mitochondrial genome and not to the mitochondrial DNA-like sequences (NUMTs) in the nuclear genome [36]. Moreover, it has been shown that expression of mt-sRNAs is not associated with NUMT levels but varies across tissues depending on the mitochondrial DNA content [36]. This indicates that mt-sRNAs have biological roles and, hence, that the mt-sRNAs found to be differentially expressed in datasets-1 and -2 could be implicated in the disease conditions.
## 4.1. Implementation
The code for mtR_find is written in Python 3.6.8 (also compatible with Python 2.7.5). Its dependencies include the Python modules pandas (version 0.21.0 and above) [37], multiprocessing, and matplotlib [38] (optional), as well as external tools such as bowtie (version 1.1.2 and above) [39] and samtools (version 1.9 and above) [40].
## 4.2. Data Resources, Extraction of Mitochondrial Genome, and Annotation File
Depending on the species of interest (input parameter), the mitochondrial genomes of *Homo sapiens*, *Danio rerio*, *Gallus gallus*, *Mus musculus*, and *Rattus norvegicus* are downloaded from Ensembl [41]. In the case of *Xenopus laevis* and *Xenopus tropicalis*, the mitochondrial genomes are downloaded from Xenbase [42]. A bowtie index corresponding to the particular genome was created using default parameters. The gene annotations were obtained by downloading the gene transfer format (GTF) annotation file for the species of interest from Ensembl/Xenbase and extracting the information pertinent to the mitochondrial genes. For any other species not listed above, the FASTA and GTF files have to be downloaded and provided manually by the user. The script mt_annotation.py can be used to pre-process the GTF file (https://github.com/asan-nasa/mtR_find/blob/master/add-on/mt_annotation.py, accessed on 26 August 2022).
## 4.3. ncRNA Count Generation
In the ncRNA-count generation step, a dictionary of unique sequences is created from the list of all input FASTQ files. Using this as a reference, the count for each unique sequence is determined for every individual FASTQ file. By default, sequences with a total read count below 200 are discarded, because the counting accuracy of low-count sequences can be erratic [5,43]; however, users can specify their own cut-off value tailored to the specific needs of their analyses. The output read count file is in comma-separated value (CSV) format, in which row names are unique sequences and column names are file names; each row displays the count of a particular sequence in the corresponding library. In the case of SOLiD sequencing data, reads have to be mapped to the corresponding genome and converted from color-space to FASTQ files using the adapt_find script [44], available at https://github.com/asan-nasa/adapt_find/blob/master/adapt_find.py (accessed on 26 August 2022), prior to the read-count generation step.
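As a rough illustration of the count-table assembly and the default cut-off, the following sketch builds the CSV described above; the per_file structure (carried over from a counting step) and its toy values are assumptions, not mtR_find output.

```python
# Hedged sketch: assemble the read-count matrix (rows = unique sequences,
# columns = libraries) and drop sequences below the default total of 200.
import pandas as pd

# per_file would come from the counting step; shown here with toy data
per_file = {
    "lib1.fastq": {"ACGTACGT": 150, "TTGGCCAA": 90},
    "lib2.fastq": {"ACGTACGT": 120, "TTGGCCAA": 80},
}
counts = pd.DataFrame(per_file).fillna(0).astype(int)
counts = counts[counts.sum(axis=1) >= 200]  # default cut-off of 200 total reads
counts.to_csv("read_counts.csv")
```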
## 4.4. Mapping
Unique sequences from the read count file were extracted, converted to FASTA format, and mapped against the mitochondrial bowtie index using the following parameters: bowtie --best -v 1 -p 20. The mapped and unmapped sequences from the resulting SAM file were separated using samtools. Unmapped sequences carrying a non-templated CCA motif at their 3′ ends were retrieved, the CCA motif was trimmed, and the sequences were mapped again to the mitochondrial genome, this time under a zero-mismatch criterion to avoid false-positive findings. Sequences mapping to the 3′ end of mitochondrial tRNA genes in the sense direction, or to the 5′ end in the anti-sense direction, were annotated as carrying a non-templated CCA addition at their 3′ ends (Figure 2).
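The CCA-rescue logic can be sketched as follows; the function name, the minimum-length guard, and the example reads are assumptions for illustration, and the actual remapping would be done by a second zero-mismatch bowtie pass.

```python
# Illustrative sketch of the CCA-rescue step: unmapped reads ending in a
# (possibly non-templated) CCA are trimmed and queued for remapping.
def collect_cca_candidates(unmapped_seqs):
    """Return {original: CCA-trimmed} for reads carrying a 3' CCA motif."""
    candidates = {}
    for seq in unmapped_seqs:
        if seq.endswith("CCA") and len(seq) > 18:  # assumed minimum length
            candidates[seq] = seq[:-3]  # trim the 3' CCA before remapping
    return candidates

unmapped = ["GGTTCGATTCCGGCTCGAAGGACCA", "ACGTACGTACGT"]
print(collect_cca_candidates(unmapped))
```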
## 4.5. Annotation
Genomic locations of mapped sequences were determined (Figure 3). Gene annotation was then performed using the individual mitochondrial genes (Supplementary File S8, Table S2). The final sequence annotation was based on the position and length of a mapped sequence within a gene, using the MINTbase criteria [19] with some modifications (Supplementary File S8, Table S3). For both mt-sRNAs and mt-lncRNAs, if the sequence start site lies in one gene and the end site in another (Figure 3D), the gene containing the start site is used for annotation; the only exception to this rule is tRF-1. The MINTbase classification of mt-sRNAs includes tRH-5′ and tRH-3′, while tRNA-derived fragments (tRFs) include tRF-5′, tRF-3′, tRF-1, and i-tRF.
## 4.6. Nomenclature
Two levels of ID are produced. The specific ID provides a unique annotation for every possible isoform of a sequence. The general ID provides the annotation of the family to which the given sequence belongs, in terms of the typical starting nucleotide, omitting information on sequence length and on modifications relative to the main form. The nomenclature format for mt-sRNAs is: "species name"|"mt-sRNA"|"gene"|"sequence subtype"|"strand"|"orientation"|"sequence start position"|"sequence length"|"substitutions". For mt-lncRNAs, the format is "species name"|"mt-lncRNA"|"gene"|"strand"|"sequence start position"|"sequence length".
The species abbreviation is a three- or four-letter organism code as proposed in the Kyoto Encyclopedia of Genes and Genomes (www.genome.jp/kegg/catalog/org_list.html (accessed on 19 February 2023)). The species abbreviations used in the present study are given in Supplementary File S8, Table S4. The gene name refers to one of the mitochondrial genes (Supplementary File S8, Table S2); if the sequence falls in a non-coding region, it is denoted as non-coding ("nc") (Figure 3). The sequence subtype refers to the specific location within a gene transcript (applicable only to mt-sRNAs), as defined in Supplementary File S8, Table S3. The sequence start position refers to the genomic position of the 5′ nucleotide of the sequence. Strand refers to either the heavy or the light strand. Antisense orientation indicates anti-sense mapping of the sequence to a particular gene. Substitutions refer to any mismatches in the sequence relative to the reference genome; if they occur, the nucleotide position (from the start of the sequence) is given, along with the base to which the main form has been altered. Example nomenclature is given in Table 2.
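For illustration, a specific ID following the mt-sRNA format above could be assembled as in the sketch below; the field values are hypothetical and do not correspond to a real mtR_find record.

```python
# Sketch of composing the pipe-delimited specific ID from annotation fields.
def make_specific_id(species, gene, subtype, strand, orientation,
                     start, length, substitutions=""):
    fields = [species, "mt-sRNA", gene, subtype, strand, orientation,
              str(start), str(length)]
    if substitutions:  # the substitutions field is appended only when present
        fields.append(substitutions)
    return "|".join(fields)

# e.g., a hypothetical 32-nt tRH-5 from mt-tRNA-Val on the heavy strand
print(make_specific_id("hsa", "tRNA-Val", "tRH-5", "H", "sense", 1602, 32))
```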
## 4.7. Training-Experimental Dataset
We tested the tool on two small RNA (sRNA) datasets [18,21] downloaded from NCBI and one long non-coding RNA dataset (unpublished study [45]) downloaded from the European Nucleotide Archive (ENA). MINTmap was also run on the two sRNA datasets to compare its performance with that of mtR_find. The two sRNA datasets were generated in studies where mt-ncRNAs were not analyzed. Dataset-1 contained sRNA-seq data from hepatocellular carcinoma (HCC) versus non-malignant liver samples from subjects with chronic hepatitis B or C ($$n = 4$$ for each group), as well as from an uninfected control group undergoing resection of metastatic tumors ($$n = 4$$, Supplementary File S13). Dataset-2 comprised sRNA-seq data from semen samples of 23 human subjects, classified as either lean ($$n = 13$$) or obese ($$n = 10$$; Supplementary File S13). Dataset-3 was generated from RNA-seq of mouse oocytes ($$n = 2$$) and 1-cell embryos (Supplementary File S13).
For the sRNA datasets, the SRA files were downloaded using the prefetch SRA utility and converted to FASTQ files using the fastq-dump tool [46]. Adapter sequences were removed from the raw FASTQ files, bases with a quality score below 20 were trimmed from the 3′ end, and sequences shorter than 15 nt were removed. Read counts of mt-sRNA sequences were extracted by running mtR_find, and differential expression analysis was performed using the DESeq2 R package [47]. mt-sRNA sequences with a Benjamini–Hochberg-adjusted p-value of <0.1 were considered differentially expressed (subject versus control). For mt-lncRNAs, paired-end FASTQ files obtained from ENA were merged into single-read FASTQ files using FLASH [48] and then run through mtR_find. Due to the lack of biological replicates in dataset-3, only the relative abundance of read counts was reported in our analysis.
## 4.8. Training-Simulated Dataset
mtR_find was tested on simulated datasets for both mt-sRNAs and mt-lncRNAs using separate scripts with the following command line parameters: (1) a FASTA file (in this case, the zebrafish mitochondrial genome); (2) a GTF file (zebrafish mitochondrial gene annotation); (3) the desired number of unique sequences in each simulated file; and (4) the total number of simulated files to be created. The GTF file was read and separated into two lists: the first based on strand specificity (heavy or light strand), the second based on genes.
The simulation script picked a random sequence start position from a random gene or from the non-coding region, on either the heavy or the light strand. A random length was then selected and added to the start position to compute the sequence end position. Using the start and end positions as co-ordinates, the sequence was extracted from the input mitochondrial genome. For light-strand sequences, the reverse complement of the forward-strand sequence was extracted. A random count was assigned to each sequence in each simulated file, and this information was used to create a simulated FASTQ file from the sequence and count data. Random simulation of sequences and the corresponding read counts was performed using the Python module "random". The simulation script outputs a simulated read count CSV file with sequence and annotation information, which should match the output of mtR_find when the simulated FASTQ files are analyzed.
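A minimal sketch of this sampling logic is given below, with toy gene coordinates and a stand-in genome string rather than the actual zebrafish files.

```python
# Minimal sketch of the simulation logic: pick a random start within a
# random gene, extract the sequence, and reverse-complement light-strand
# picks. GENOME and GENES are toy stand-ins, not the zebrafish references.
import random

GENOME = "ACGT" * 5000  # stand-in for the mitochondrial genome FASTA
GENES = {"ND1": (2500, 3500, "H"), "ND6": (14000, 14500, "L")}
COMP = str.maketrans("ACGT", "TGCA")

def simulate_read(min_len=18, max_len=40):
    gene, (g_start, g_end, strand) = random.choice(list(GENES.items()))
    start = random.randint(g_start, g_end - max_len)
    length = random.randint(min_len, max_len)
    seq = GENOME[start:start + length]
    if strand == "L":  # light strand: reverse complement of the reference
        seq = seq.translate(COMP)[::-1]
    return gene, strand, start, seq, random.randint(1, 500)  # plus a read count

print(simulate_read())
```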
The simulation scripts used different strategies to distribute reads among sequences, as described in Supplementary File S8 and Tables S2, S4 and S5. In both methods, however, the total number of reads was split such that 80–$95\%$ were simulated from the heavy strand and the remaining 5–$20\%$ from the light strand. The simulated dataset was analyzed with the mtR_find tool, and the results were compared with those of the simulation. Four parameters were calculated to check concordance: (1) the number of unique sequences; (2) the sequences mapping to the mitochondrial genome and their distribution between the two strands; (3) the total read count and the count of individual sequences in each file; and (4) the annotation information and the read count distribution among four bio-types: rRNA, tRNA, non-coding region, and protein-coding genes.
Simulation and testing of the tool were performed on a Linux server (Red Hat 4.8.5–28) with Python 3.6.8 (64 CPU cores, 504 GB RAM).
## 4.9. Identification of Novel tRFs
tRFs were downloaded from MINTbase [19] as a tab-delimited file, while the mitochondrial tRFs (test sequences) obtained from mtR_find were in CSV format. Both files were loaded as separate pandas data frames, and the sequence columns were extracted into two separate lists. The sequences from the two lists were then compared (Supplementary File S14); only exact sequence matches were allowed.
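The exact-match comparison reduces to a set difference once both sequence columns are loaded; the sketch below assumes hypothetical file and column names.

```python
# Hedged sketch of the novelty check: exact-match comparison of mtR_find
# tRF sequences against MINTbase entries. File and column names are
# placeholders, not the actual downloaded files.
import pandas as pd

mintbase = set(pd.read_csv("mintbase_trfs.tsv", sep="\t")["sequence"])
mtr_find = set(pd.read_csv("mtR_find_trfs.csv")["sequence"])

novel = mtr_find - mintbase  # present in the mtR_find output only
print(f"{len(novel)} of {len(mtr_find)} tRFs not found in MINTbase")
```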
## 5. Conclusions
Existing tools can identify only a sub-group of mt-sRNAs. mtR_find is the first publicly available tool to comprehensively analyze and annotate all mitochondrial non-coding RNAs. Its novel read count algorithm significantly reduces execution time, making high-throughput analysis of multiple datasets possible. mtR_find does not create any intermediate files and, hence, saves disk space. Moreover, a single mtR_find script pre-processes the data, maps the reads, and generates count data with annotation information for all files. mtR_find identifies novel mt-sRNAs, such as tRFs, and mt-lncRNAs in existing datasets, opening a new analytical possibility: re-examining existing RNA-seq datasets in search of novel diagnostic markers. |
# Increased Prolonged Sitting in Patients with Rheumatoid Arthritis during the COVID-19 Pandemic: A Within-Subjects, Accelerometer-Based Study
## Abstract
Background: Social distancing measures designed to contain the COVID-19 pandemic can restrict physical activity, a particular concern for high-risk patient groups. We assessed rheumatoid arthritis patients' physical activity and sedentary behavior levels, pain, fatigue, and health-related quality of life prior to and during the social distancing measures implemented in Sao Paulo, Brazil. Methods: Post-menopausal females diagnosed with rheumatoid arthritis were assessed before (from March 2018 to March 2020) and during (from 24 May to 7 July 2020) the social distancing measures to contain the COVID-19 pandemic, using a within-subjects, repeated-measures design. Physical activity and sedentary behavior were assessed using accelerometry (ActivPAL micro). Pain, fatigue, and health-related quality of life were assessed by questionnaires. Results: Mean age was 60.9 years and BMI was 29.5 kg/m2. Disease activity ranged from remission to moderate activity. During social distancing, there were reductions in light-intensity activity ($13.0\%$ [−0.2 h/day, $95\%$ CI: −0.4 to −0.04; $$p = 0.016$$]) and moderate-to-vigorous physical activity ($38.8\%$ [−4.5 min/day, $95\%$ CI: −8.1 to −0.9; $$p = 0.015$$]), but not in standing time or sedentary time. However, time spent in prolonged bouts of sitting ≥30 min increased by $34\%$ (1.0 h/day, $95\%$ CI: 0.3 to 1.7; $$p = 0.006$$) and ≥60 min by $85\%$ (1.0 h/day, $95\%$ CI: 0.5 to 1.6). There were no changes in pain, fatigue, or health-related quality of life (all $p > 0.05$). Conclusions: Imposed social distancing measures to contain the COVID-19 outbreak were associated with decreased physical activity and increased prolonged sitting, but did not change clinical symptoms among patients with rheumatoid arthritis.
## 1. Introduction
A preliminary, multinational survey reporting step counts provided by smartphones showed that social distancing measures to contain the spread of SARS-CoV-2 have induced physical inactivity (i.e., not meeting the physical activity guidelines) [1]. The onset of the coronavirus disease 2019 (COVID-19) pandemic has placed further spotlight on participation in sedentary behavior (i.e., time spent in a sitting or reclining posture with a low energy expenditure [≤1.5 METs]), with reported increases in daily sitting time from pre-pandemic levels ranging from 30 min up to 3 h in different populations [2,3].
Extensive epidemiological evidence indicates that physical inactivity is a major risk factor for early mortality and chronic diseases, including obesity, type 2 diabetes, cardiovascular diseases, metabolic syndrome, and certain types of cancer [4]. Even though insufficient time spent in moderate-to-vigorous intensity physical activity has the strongest detrimental associations with health outcomes [5,6,7], similar detrimental relationships have been broadly observed for excessive time in sedentary behaviors [7,8,9,10,11,12,13,14,15,16]. Importantly, both total sitting time and prolonged, uninterrupted sitting time are associated with increased risk of all-cause mortality even after accounting for the influence of participation in moderate-to-vigorous intensity physical activity [7,8,17]. Moreover, the deleterious associations of sedentary behavior with cardiometabolic risk and all-cause mortality are most pronounced in those who are physically inactive [6,11,18,19,20].
Rheumatoid arthritis is a rheumatic autoimmune disease characterized by chronic inflammation, pain, and physical disability [21]. Clinical disease symptoms can include joint pain, swelling, stiffness, and deformity, fatigue, muscle weakness, and reduced physical functioning [22,23]. Patients with rheumatoid arthritis have a higher risk of morbidity and mortality from cardiovascular diseases [24]. This increased risk can be at least partially explained by the complex interplay between chronic inflammation, adverse effects of drugs, associated comorbidities (e.g., dyslipidemias, insulin resistance, hypertension), and lifestyle [25,26]. Despite physical activity being advocated as an integral part of disease standard care [27], physical inactivity and sedentary behavior are highly prevalent among patients with rheumatoid arthritis [28].
Physical inactivity and sedentary behavior are modifiable risk factors and potential targets for preventing morbimortality in autoimmune rheumatic diseases [28,29]. Among patients with rheumatoid arthritis, sedentary behavior is associated with higher disease activity scores, increased pain and fatigue [30], a greater number of comorbidities, reduced aerobic capacity [31] and physical function [30], and poor self-efficacy [32]. Furthermore, physically inactive patients with rheumatoid arthritis exhibit more cardiovascular risk factors (e.g., higher systolic blood pressure, higher homeostasis model assessment (HOMA) index, and an abnormal lipid profile) than their physically active counterparts.
Patients with rheumatoid arthritis have been shown to be more susceptible to COVID-19 infection [33] and, therefore, may be subjected to more restrictive measures of social distancing, potentially with significant impacts on their activity options, and, hence, on their burden of cardiovascular disease risk, the main cause of mortality in this population [26].
In this prospective study using a within-subjects design, we assessed physical activity and sedentary behavior levels using accelerometers in patients with rheumatoid arthritis prior to and during the imposed measures of social distancing to combat COVID-19 in Sao Paulo, Brazil. Additionally, we have assessed whether potential changes in physical activity and sedentary behavior levels would be associated with changes in pain, fatigue, and health-related quality of life.
## 2.1. Participants
Sixty-four patients diagnosed with rheumatoid arthritis were recruited from the Outpatient Rheumatoid Arthritis Clinic of the Clinical Hospital (School of Medicine, University of Sao Paulo) between March 2018 and March 2020 to participate in a randomized controlled trial (clinicaltrials.gov: NCT03186924). Thirty-five of these 64 patients agreed to participate in this ancillary study.
Post-menopausal female patients diagnosed with rheumatoid arthritis, according to the American College of Rheumatology/European League Against Rheumatism collaborative initiative revised criteria [34], were recruited directly from the Rheumatoid Arthritis Outpatient Clinic of the Rheumatology Division. The exclusion criteria were: (1) participation in structured exercise training programs within the last 12 months; (2) unstable drug therapy in the 3 months prior to and during the study; and (3) a Health Assessment Questionnaire score >2.0 (i.e., severe to very severe physical impairment).
This trial was approved by the local ethical committee (Commission for Analysis of Research Projects, CAPPesq; protocol code: 58340316.0.0000.0068; approval number: 1.735.096). Patients signed an informed consent form before participation in the study.
## 2.2. Experimental Design
All patients with rheumatoid arthritis had undergone a clinical and physical activity assessment before the official social distancing measures to contain the COVID-19 outbreak were adopted on 24 March 2020. This provided a unique opportunity to track physical activity levels during the pandemic in a within-subjects, repeated-measures design. We then obtained new approval from the ethics committee for collecting data during the social distancing period. Three members of our staff (DR, SMS, KM) delivered the accelerometers (ActivPAL micro™, PAL Technology, Glasgow, UK) and questionnaires to the patients at home from 24 May to 7 July 2020. The time elapsed between the baseline and during-social-distancing data collections was 12.5 months (9.9, 15.2). Patients were asked whether they had adhered to the social distancing measures; all but two responded affirmatively. Data were analyzed with and without the two non-compliers, and the results remained the same; thus, we report the full data in this manuscript.
## 2.3. Physical Activity Level
Physical activity level was measured using the activPAL micro™ (PAL Technology, Glasgow, UK) activity-based accelerometer before and during social distancing. Patients wore the accelerometer for 7 consecutive days (24 h/day); it was fitted using adhesive tape (3M, Tegaderm®) on the right medial front thigh, orientated with the x-axis pointing downward, the y-axis horizontally to the left, and the z-axis horizontally forward. Data were exported and analyzed using ActivPAL3™ software, version 8.10.9.46 (PAL Technology, UK). Data were checked by an experienced researcher and cross-checked against a sleep diary. To avoid bias from differences in patients' wear time, all data were standardized to a 16-h day by the formula: (data × 16)/wear time. Data were reported as follows: time spent sitting and lying (h/day), in prolonged sitting (h/day), standing (h/day), and stepping (h/day); time spent in light-intensity physical activity (step cadence <100 steps/min [35]); time spent in moderate-to-vigorous intensity physical activity (step cadence ≥100 steps/min [35]); and the number of sit-to-stand transitions (i.e., breaks in sedentary behavior).
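As a worked example of the standardization formula, the sketch below scales an observed sitting time to the common 16-h day; the input values are invented.

```python
# Small sketch of the 16-h wear-time standardization applied to one
# accelerometer variable; values are illustrative, not study data.
def standardize_to_16h(value, wear_time_h):
    """Scale a daily activity value to a common 16-h waking day."""
    return value * 16 / wear_time_h

sitting_h = 9.8       # observed sitting time (h/day)
wear_time_h = 15.2    # valid wear time that day (h)
print(round(standardize_to_16h(sitting_h, wear_time_h), 2), "h/day")
```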
## 2.4. Clinical Assessment
Clinical characteristics were assessed at baseline, before the implementation of social distancing. Disease activity was assessed by the Disease Activity Score in 28 joints (DAS28 PCR) [36] and the Clinical Disease Activity Index (CDAI) [37], in which higher scores represent more severe disease activity. The Health Assessment Questionnaire (HAQ) [38], which evaluates physical functioning in eight domains of daily life, was also used; higher scores represent greater physical disability. Disease duration, presence of comorbidities (e.g., hypertension, dyslipidemia, type 2 diabetes, depression, and other rheumatic diseases), current dose of prednisone, current use of biological agents (e.g., anti-TNF, anti-IL6, anti-IL1, B-cell depleting agents, and T-cell activation inhibitors), non-biological disease-modifying anti-rheumatic drugs (e.g., methotrexate and leflunomide), and other medications (i.e., anti-inflammatory drugs, pain killers, antihypertensive drugs, antihyperlipidemic drugs, antidiabetic drugs, and anti-depressants) were obtained by reviewing medical records and interviewing the patients. Blood samples (~10 mL) were collected after a 12-h overnight fast for measurement of C-reactive protein and erythrocyte sedimentation rate. Samples were collected in vacutainer tubes and analyzed at the Clinical Hospital Central Laboratory (School of Medicine, University of Sao Paulo). C-reactive protein was determined by immunoturbidimetry; erythrocyte sedimentation rate was assessed using an automated analyzer.
Pain, fatigue, and health-related quality of life were assessed before and during social distancing. Pain was assessed by the Visual Analog Scale [39], in which patients graded their pain on a 10-point scale; zero means no pain and 10 means severe or unbearable pain. Fatigue was assessed by the Fatigue Severity Scale [40], with scores ranging from 9 to 63; lower scores indicate lower fatigue. Physical and mental health-related quality of life were assessed by the 36-Item Short Form Survey (SF-36) questionnaire [41], in which scales (physical health: physical function, role-physical, bodily pain, and general health; mental health: vitality, social function, and role-emotional) range from 0 to 100; higher scores indicate better quality of life.
## 2.5. Statistical Analysis
Dependent variables were tested using repeated-measures mixed models, with time (before social distancing versus during social distancing) as a fixed factor and participants as a random factor, with a compound symmetry covariance matrix. Delta changes in all dependent variables were calculated as: delta change = (data during social distancing) − (data before social distancing). Associations between changes in physical activity and sedentary behavior levels and changes in pain, fatigue, and health-related quality of life were tested using Pearson correlation tests.
Statistical analysis was performed in SAS 9.3 (SAS Institute Inc., Cary, NC, USA). Data are presented as mean, estimated mean difference from the repeated measures mixed models, and $95\%$ confidence intervals ($95\%$ CI). The significance level was set at p ≤ 0.05.
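For illustration, the delta-change computation and correlation test could be reproduced as in the sketch below, which uses SciPy on simulated values rather than the SAS procedures used in the study.

```python
# Sketch of the delta-change and Pearson correlation steps; all values
# are simulated and do not represent the study data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
sitting_before = rng.normal(10, 1.5, 35)            # h/day, n = 35 patients
sitting_during = sitting_before + rng.normal(0.3, 1.0, 35)
pain_before = rng.uniform(0, 10, 35)
pain_during = pain_before + rng.normal(0, 1.5, 35)

delta_sitting = sitting_during - sitting_before     # during minus before
delta_pain = pain_during - pain_before
r, p = pearsonr(delta_sitting, delta_pain)
print(f"r = {r:.2f}, p = {p:.3f}")
```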
## 3. Results
Patients’ clinical characteristics are presented in Table 1. In summary, mean age was 60.9 years ($95\%$ CI: 58.0 to 63.7) and BMI was 29.5 Kg/m2 ($95\%$ CI: 27.2 to 31.9). Disease activity ranged from remission to moderate activity, as assessed by DAS28 PCR and CDAI. Disability assessed by HAQ ranged from mild to severe. Mean disease duration was 18.5 years ($95\%$ CI: 14.7, 22.3). Mean C-reactive protein was 10.8 mg/dL ($95\%$ CI: 5.5 to 16.2) and erythrocyte sedimentation rate was 28.4 mm/H ($95\%$ CI: 15.7 to 41.1). Most of the patients were using disease-modifying anti-rheumatic drugs and prednisone ($85.7\%$ and $74.3\%$, respectively). Hypertension, dyslipidemia and type 2 diabetes were the most frequent comorbidities ($51.4\%$, $48.6\%$ and $34.3\%$, respectively). Before social distancing, mean pain was 5.0 ($95\%$ CI: 4.1 to 6.0), fatigue was 39.3 ($95\%$ CI: 33.8 to 44.8), and physical and mental health-related quality of life were 39.8 ($95\%$ CI: 33.1 to 46.5) and 62.0 ($95\%$CI: 52.3 to 71.7), respectively.
During social distancing, there were reductions in total stepping time ($15.7\%$ [−0.3 h/day, $95\%$ CI: −0.4 to −0.1; $$p = 0.004$$]), light-intensity physical activity ($13.0\%$ [−0.2 h/day, $95\%$ CI: −0.4 to −0.04; $$p = 0.016$$]), and moderate-to-vigorous physical activity ($38.8\%$ [−4.5 min/day, $95\%$ CI: −8.1 to −0.9; $$p = 0.015$$]), but no changes in total standing time (−0.1 h/day, $95\%$ CI: −0.7 to 0.5; $$p = 0.767$$) or total sedentary time (0.3 h/day, $95\%$ CI: −0.4 to 1.0; $$p = 0.335$$) in patients with rheumatoid arthritis. However, time spent in prolonged bouts of sitting ≥30 min increased by $34\%$ (1.0 h/day, $95\%$ CI: 0.3 to 1.7; $$p = 0.006$$; Figure 1A) and in sitting bouts ≥60 min by $85\%$ (1.0 h/day, $95\%$ CI: 0.5 to 1.6; $p < 0.001$; Figure 1B). Sit-to-stand transitions were reduced by $10\%$ (−5.1/day, $95\%$ CI: −10.3 to 0.0; $$p = 0.051$$). Figure 1C,D illustrate accelerometer data from a patient who showed decreased activity and increased prolonged sitting after social distancing.
During social distancing, there were no changes in pain (0.31 [$95\%$ CI: −1.04 to 1.67]; $$p = 0.652$$), fatigue (−2.3 [$95\%$ CI: −10.0 to 5.4]; $$p = 0.550$$), or physical and mental health-related quality of life (1.2 [$95\%$ CI: −8.2 to 10.7]; $$p = 0.796$$ and −9.3 [$95\%$ CI: −23.0 to 4.5]; $$p = 0.183$$, respectively) in patients with rheumatoid arthritis.
Changes in physical activity and sedentary behavior levels were not associated with changes in pain, fatigue, or physical and mental health-related quality of life during social distancing (all $p > 0.05$).
## 4. Discussion
To our knowledge, this is the first study to track physical activity and sedentary behavior patterns before and during the COVID-19 pandemic using validated accelerometry and a within-subjects design. Our main findings suggest that social distancing (including a stay-at-home order) can lead to reduced ambulatory activity, increased physical inactivity, and increased prolonged sitting among patients with rheumatoid arthritis. In contrast, social distancing was not associated with worsened pain, fatigue, or physical and mental health-related quality of life. Physical inactivity, along with too much sitting, emerges as a risk factor that could be detrimental to cardiometabolic health in such a high-risk group of patients during, and possibly after, the COVID-19 pandemic.
As those confined at home are less prone to perform physical activity, it has been speculated that inactivity and sedentary behavior could peak during the COVID-19 pandemic [29]. In fact, a rapid review has shown a substantial decrease in physical activity with a concomitant increase in sedentary behavior across all age groups during COVID-19 lockdowns [42]. For the Brazilian population, a national retrospective survey comprising 39,693 adults and older adults showed a significant increase in self-reported physical inactivity and screen-based sedentary behaviors during the COVID-19 pandemic [43,44], which corroborates the objectively measured data presented herein. Such an increase in inactivity and sedentary behavior is of particular concern for those who are usually hypoactive and at higher risk of cardiovascular disease, as is the case for patients with rheumatoid arthritis (see the patients' comorbidities in Table 1) [26,28].
Observational and experimental evidence demonstrates that inactivity can predispose to pathological states and poor outcomes [45]. Sedentary behavior can add to the adverse impacts of physical inactivity in impairing cardiovascular health [46]. Consequently, individuals who are both physically inactive and highly sedentary are at the highest risk of poor outcomes [6,20], which may be the case for patients with autoimmune rheumatic diseases, as they commonly spend most of their waking hours engaged in sedentary behavior and do not achieve minimum levels of moderate-to-vigorous physical activity [28]. In rheumatoid arthritis specifically, the estimates of physical inactivity and sedentary behavior are comparable to those of other chronic diseases (e.g., type 2 diabetes and cardiovascular diseases), groups in which both physical inactivity and sedentary behavior are associated with poor disease prognosis and mortality [9,10,11,13,47], as well as with poor health-related outcomes (i.e., higher disease activity scores, more disease symptoms and comorbidities, and lower physical capacity and functioning) [28]. In rheumatoid arthritis, regular participation in exercise improves disease symptoms, inflammatory markers, cardiometabolic risk factors, and physical capacity [48,49]. However, regular participation in moderate-to-vigorous physical activity may not be feasible for some patients, especially those with poor mobility or during disease flares.
Interestingly, we observed that even in the absence of changes in total sedentary time, prolonged sitting time rose considerably. Prolonged, uninterrupted bouts of sedentary behavior are associated with all-cause mortality [8], whereas well-controlled studies show that very-light to light-intensity active interruptions of prolonged sedentary time (e.g., 2 min of walking for every 30 min of sitting) can elicit immediate improvements in cardiometabolic risk factors [50]. Recent evidence has shown that light-intensity physical activity is associated with lower disability, disease activity, and cardiovascular risk in rheumatoid arthritis, in contrast to excessive sitting [28,51]. Additionally, a randomized crossover trial demonstrated that performing 3-min bouts of light-intensity walking every 30 min of sitting (total: 42 min) improved glycemic (i.e., glucose, insulin, and c-peptide) and inflammatory (i.e., IL-1β, IL-1ra, IL-10, and TNF-α) markers when compared to 8 h of prolonged, uninterrupted sitting in postmenopausal females with rheumatoid arthritis [52]. This underscores the need for a widespread recommendation to break up prolonged sitting whenever possible (e.g., 3-min breaks of light-intensity walking every 30 min of sitting) to avoid poor health outcomes during the pandemic. Such recommendations are especially relevant for groups at high risk from COVID-19, such as those with autoimmune rheumatic diseases [33], conditions associated with lower vaccine responses, which may force more vulnerable patients to maintain some degree of physical distancing and home isolation for as long as the pandemic endures.
Our findings suggest that social distancing did not affect pain, fatigue, or physical and mental health-related quality of life. Qualitative evidence in patients with rheumatoid arthritis indicates that patients reported no changes in physical health outcomes, although they noted that social distancing worsened mental health-related symptoms [53]. Additionally, changes in these variables were not associated with changes in physical activity and sedentary behavior. Because this study was performed 2 to 4 months after the introduction of social distancing measures, we cannot rule out that this short period of exposure was insufficient to detect impairments in these outcomes. Alternatively, it is possible that patients with rheumatoid arthritis are more resilient than the general population to the detrimental impacts of the pandemic on overall health.
The main strengths of this study are its within-subjects design and the use of posture-based accelerometers, which enable an objective and comprehensive assessment of sedentary behavior patterns. The limitations include the relatively small sample size; the lack of measurement of mood and of medication and supplement use during social distancing, which may also alter habitual physical activity; and the inability to establish a cause-and-effect relationship between changes in behavior and social distancing measures, although elements of temporality and plausibility support our assumptions.
## 5. Conclusions
Imposed social distancing measures to contain the COVID-19 outbreak were associated with decreased physical activity and increased prolonged sitting time, but no changes in clinical symptoms (pain, fatigue, and health-related quality of life) among patients with rheumatoid arthritis. Since this has the potential to increase the burden of cardiovascular disease in such a high-risk group of patients, attention to maintaining their activity levels is an urgent consideration during the pandemic, and possibly thereafter, since inactivity and sedentariness may carry over as consequences of the outbreak. |
# Measuring Sleep Quality in the Hospital Environment with Wearable and Non-Wearable Devices in Adults with Stroke Undergoing Inpatient Rehabilitation
## Abstract
Sleep disturbances are common after stroke and may affect recovery and rehabilitation outcomes. Sleep monitoring in the hospital environment is not routine practice, yet it may offer insight into how the hospital environment influences post-stroke sleep quality while also enabling us to investigate the relationships between sleep quality and neuroplasticity, physical activity, fatigue levels, and recovery of functional independence during rehabilitation. Commonly used sleep monitoring devices can be expensive, which limits their use in clinical settings. Therefore, there is a need for low-cost methods to monitor sleep quality in hospital settings. This study compared a commonly used actigraphy sleep monitoring device with a low-cost commercial device. Eighteen adults with stroke wore the Philips Actiwatch to monitor sleep latency, sleep time, number of awakenings, time spent awake, and sleep efficiency. A sub-sample ($$n = 6$$) slept with the Withings Sleep Analyzer in situ, recording the same sleep parameters. Intraclass correlation coefficients and Bland–Altman plots indicated poor agreement between the devices. Usability issues were reported, and there were inconsistencies between the objectively measured sleep parameters recorded by the Withings device and the Philips Actiwatch. While these findings suggest that low-cost devices are not suitable for use in a hospital environment, further investigations in larger cohorts of adults with stroke are needed to examine the utility and accuracy of off-the-shelf low-cost devices to monitor sleep quality in the hospital environment.
## 1. Introduction
Reports of sleep disturbances are common in adults with stroke and are particularly evident during the acute recovery phase while in hospital [1,2]. Sleep problems are experienced by up to two-thirds of people after having a stroke [3], with 25 to $85\%$ of people experiencing persistent fatigue [4]. Importantly, poor sleep quality has been found to affect neuroplasticity and memory consolidation post-stroke [5,6], which may have an adverse causal impact on a range of recovery outcomes. Moreover, poor sleep is associated with poorer motor function [7] and lower levels of physical activity as well as higher levels of fatigue during inpatient rehabilitation [8]. Finally, higher levels of sleep disruption are associated with slower recovery of functional independence and motor recovery throughout inpatient rehabilitation [1,9]. Sleep disruption is common during a hospital stay, regardless of the reason for admission, most often due to clinical care interventions and the noisy environment [10]. However, these impacts may have a particularly damaging effect for people who have had a stroke by impeding neurological recovery while also reducing the level of energy needed to enable optimal participation in rehabilitation [11].
Despite the known impact of sleep quality on stroke outcomes, sleep monitoring and interventions in the hospital environment during inpatient rehabilitation are not routine. In order to improve clinical outcomes, it is important that we have access to simple and valid methods for sleep monitoring. Wearable actigraphy devices are non-invasive technologies that can monitor and accurately measure objective sleep parameters. These devices have been widely adopted for the measurement of human biometrics, as they can monitor sleep and activity levels in a range of settings without causing discomfort to the wearer, including the assessment of sleep following stroke [12]. However, these monitors are relatively expensive (~USD 2000 each), preventing their routine use in research and clinical settings [13]. Commercial sleep monitoring devices are becoming more readily available on the personalized-device market and may offer low-cost alternatives for monitoring sleep quality [14,15]. Consumer devices estimate sleep by measuring physiological and behavioral signals (e.g., heart rate, respiration, and bodily movements) and are either worn (e.g., on the wrist) or placed under the mattress or near the bed in the same room. In the context of neurological rehabilitation, devices placed under or near the person whose sleep is being monitored may enable superior assessment of sleep, particularly for people with impairments that affect the movement of specific limbs. The Withings Sleep Analyzer (WSA) is one such "nearable" device (~USD 200 each) that is placed under the mattress to detect body movements, cardiac activity, breathing patterns, and snoring. However, while the WSA records several aspects of sleep (e.g., sleep onset and the number and duration of awakenings after sleep onset), it has been used primarily to measure decreases in or cessation of breathing during sleep for people with suspected obstructive sleep apnea syndrome [16], and it has not yet been validated in other clinical populations or settings for the measurement of sleep quality or quantity. Before the device is used more widely, it is important to investigate its accuracy in monitoring sleep relative to more common, validated actigraphy devices. Therefore, this study aimed to describe device usability and to explore the level of agreement between a validated actigraphy device and the WSA within a sample of adults undergoing hospital-based rehabilitation after stroke.
## 2.1. Study Design
The design was a cross-sectional, within-subject cohort study. Ethics approval was granted by the Alfred Health Human Research Ethics Committee (Project ID: 660/21).
## 2.2. Study Participants
Adults with stroke undergoing inpatient rehabilitation were recruited from two wards: a general medicine rehabilitation ward and a specialized neurological rehabilitation ward. Participants were recruited from two separate wards to capture the differing environments in which adults typically undergo rehabilitation after having a stroke. Adults undergoing stroke rehabilitation on the specialized neurological ward slept in private rooms with more controlled lighting, whereby lighting could be independently dimmed to facilitate onset of sleep. Conversely, adults undergoing rehabilitation on general rehabilitation wards shared a room with one other patient and did not have access to controlled lighting. The potential for noise disturbance was also higher in this shared ward environment. Participants were eligible for the study if they had a diagnosis of stroke, were able to move and roll in bed independently, and did not have any cognitive deficits impacting their capacity to understand and use the sleep monitoring devices.
## 2.3. Procedure
Sleep was monitored for one night. Participants wore the Philips Actiwatch Spectrum 2 (Philips Respironics, Pittsburgh, PA, USA) on their wrist, while the WSA (Withings, Paris, France) was placed under their mattress. The Actiwatch and WSA devices were retrieved by the researchers the following morning to extract sleep data. Participants were also asked to document the time they got into bed with the intention to sleep, the approximate time it took for them to fall asleep, the time they awoke the following morning, and the number of times they awoke overnight. These data were then used to calculate sleep onset latency (SOL), total sleep time (TST), and number of awakenings as described below.
## 2.3.1. Philips Actiwatch Spectrum 2
The Actiwatch was placed on the participant's wrist on the hemiparetic side. Placing the device on the hemiparetic side ensured that participants could safely and independently remove the device with their unaffected limb if necessary. The Actiwatch software (Actiware version 6.0, Philips Respironics, OR, USA) automatically determined sleep onset and offset times via pre-determined activity thresholds [17]. Within the lights-off/lights-on times, sleep onset was classified as the first minute of a 10-min immobile period with <2 activity counts in any 30-s period [18,19]. Ten consecutive minutes of activity defined sleep offset [18,19]. During sleep, activity counts >40 per 30-s epoch were classified as awake, allowing calculation of the number of awakenings and the amount of awake time [18,19].
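As an illustration of the thresholding logic described above, the sketch below detects sleep onset from a toy series of 30-s epoch activity counts; it is a simplified stand-in for the Actiware algorithm, not its actual implementation.

```python
# Illustrative epoch-based sleep-onset detection: onset is the first 30-s
# epoch beginning a 10-min run (20 epochs) in which every epoch has <2
# activity counts, following the thresholds described in the text.
def find_sleep_onset(counts, epochs_per_window=20, immobile_threshold=2):
    """Return the index of the first epoch of a qualifying immobile window."""
    for i in range(len(counts) - epochs_per_window + 1):
        window = counts[i:i + epochs_per_window]
        if all(c < immobile_threshold for c in window):
            return i
    return None  # no qualifying immobile period found

# toy activity counts per 30-s epoch: restless at first, then immobile
epoch_counts = [55, 30, 12, 5, 3] + [0, 1, 0, 0, 1] * 4 + [0] * 10
print("sleep onset at epoch:", find_sleep_onset(epoch_counts))
```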
## 2.3.2. Withings Sleep Analyzer
The WSA is an air-inflated sensor mat that detects body and chest movements and respiration vibrations [16]. Once placed under the mattress, the WSA was paired to application-based software (Version 2151, Health mate, Withings, Paris, France) for data collection and storage. The WSA epoch times and activity thresholds used to detect and calculate sleep and wake are not readily available; however, the mat reportedly detects body movements to determine time spent in bed and time registered as awake and asleep [16].
## 2.4.1. Participant Characteristics
Participant characteristics included age, sex, stroke type, stroke location, rehabilitation ward type, days since stroke, and days in rehabilitation at the time of sleep monitoring. No participants were receiving pharmacological support for sleep.
## 2.4.2. Sleep Parameters and Device Recording
Sleep quality outcome measures were as follows: SOL, defined as time (minutes) between a detected commencement of a rest interval when lights were registered as ‘off’ and the sleep onset time; TST, defined as total time (hours) between sleep onset and offset; number of awakenings, defined as the total number of epoch blocks within the TST interval that were registered as awake; wake after sleep onset (WASO), calculated as the total duration of each awakening episode (minutes); and sleep efficiency (SE), defined as percentage of time spent in bed asleep relative to the total time spent in bed between getting into bed and getting up the following morning.
## 2.4.3. Self-Reported Sleep
A participant questionnaire was developed to record self-reported details of the previous night’s sleep, akin to a single night of a traditional sleep diary, including SOL, TST, number of awakenings, and reasons for awakenings. The questionnaire was administered the next morning by a clinical physiotherapist.
## 2.5. Statistical Analyses
SPSS (version 28.0, IBM, IL, USA) was used to test the level of agreement, within participants, between the Actiwatch and WSA for SOL, TST, number of awakenings, WASO, and SE via intraclass correlation coefficients (ICC) [18]. Absolute agreement with a two-way mixed-effects model was used to determine the average ICC. This approach was selected because we assumed that the error from the devices would be predictable and fixed, while the error from the participants would be random. Agreement between the Actiwatch and participant-reported SOL, TST, and number of awakenings was also examined. ICC categories were established a priori as poor (<0.50), moderate (0.50–0.75), good (0.75–0.90), and excellent (>0.90) [20]. Bland–Altman plots were generated to display the differences between the Actiwatch and WSA devices against the overall mean scores for the two devices, including the upper and lower $95\%$ limits of agreement around the mean difference, consistent with previous studies examining the validity of sleep recording devices [14,18,21].
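As a worked illustration of the Bland–Altman computation (here with invented total sleep time values rather than the study data), the bias and 95% limits of agreement are obtained as follows.

```python
# Minimal sketch of a Bland-Altman computation for one sleep parameter
# (total sleep time) between the two devices; all values are invented.
import numpy as np

actiwatch = np.array([7.2, 6.5, 8.0, 5.9, 7.7, 6.8])   # TST in hours
wsa       = np.array([6.1, 5.8, 7.4, 4.6, 6.9, 5.5])   # TST in hours

diff = actiwatch - wsa
bias = diff.mean()                 # mean difference between devices (bias)
loa = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits of agreement
print(f"bias = {bias:.2f} h, LoA = [{bias - loa:.2f}, {bias + loa:.2f}] h")
```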
## 3. Results
Eighteen participants wore the Actiwatch for an entire night, with no participants removing the device from their wrist. Ten participants also used the WSA, but data for two participants could not be used due to device pairing issues, and no valid data were collected on the WSA from an additional two participants who slept in a chair instead of the hospital bed. Therefore, actigraph data were available for all 18 participants, and both actigraph and WSA data were available for a subsample of 6 participants.
Participant characteristics are provided in Table 1. There was relative consistency in the characteristics within the entire cohort ($$n = 18$$) who used the actigraphy device as well as in the sub-sample who used both the actigraphy device and the WSA ($$n = 6$$). The characteristic with the greatest discrepancy was the ward type. Of the eight participants who had sleep monitored via both Actiwatch and WSA devices, just one of these participants was recruited from the specialized neurological ward, with the remaining seven recruited from the general ward (Table 1).
Figure 1 depicts the sleep and wakening events for both the Actiwatch and WSA data for one participant, and similar comparisons are available in Supplementary Figures S1–S5 for all other participants. Raw data for all participants who used both the Actiwatch and WSA are provided in Table S1.
ICC levels of agreement for each sleep parameter were poor between the Actiwatch and participant-reported sleep quality and between the Actiwatch and WSA (Table 2). The Actiwatch underestimated sleep onset latency and overestimated total sleep time and awakenings relative to both participant report and the WSA. Overall, there appeared to be poorer agreement between the Actiwatch and WSA than between the Actiwatch and self-report for sleep onset latency. The WSA also showed markedly lower sleep efficiency and greater wake after sleep onset relative to the Actiwatch.
The Bland–Altman plots demonstrated a broad range in the difference between Actiwatch and WSA device data (Figure 2). Participants whose data fell outside of the $95\%$ confidence interval around the combined mean between the Actiwatch and WSA typically had longer sleep onset latency, lower total sleep time, more awakenings, and lower sleep efficiency, highlighting that there was a potential bias towards poor agreement when people had worse sleep.
## 4. Discussion
This study found poor agreement for all sleep parameters between the low-cost WSA and widely used Actiwatch. This study also found poor agreement between participant-reported subjective sleep quality and the Actiwatch device parameters of sleep. Given that devices, such as actigraphy watches, measure different aspects of sleep compared with subjective reports of sleep quality, their poor level of agreement has been well-documented previously [22]. The results of the present study are clinically relevant, as they highlight a need for multiple modalities of effective sleep monitoring in the clinical setting. A low-cost alternative, such as the WSA, which could routinely monitor sleep quality in adults with stroke while undergoing rehabilitation, may inform and assist in guiding rehabilitation programs to optimize recovery; however, we found that this device also did not reliably agree with the well validated Actiwatch.
It is worth noting that previous studies have also found that actigraphic recordings had poorer agreement with polysomnography than other commercially available devices, including an under-the-mattress device, in healthy young adults [14]. Importantly, the study by Chinoy et al. [14] tested the reliability of consumer wearable and “nearable” sleep tracking devices under conditions that could be considered to mimic the type and frequency of overnight disruptions that occur in an inpatient rehabilitation hospital setting. Chinoy et al. [14] found that the “under-the-mattress” device that they tested overestimated total sleep time by approximately 14 min and sleep efficiency by $2.9\%$, and underestimated time spent awake after sleep onset by 15 min relative to PSG. In comparison, we found that the WSA device underestimated total sleep time by 1.2 h and overestimated time spent awake after sleep onset by 66 min. To enable appropriate selection and use of sleep monitoring devices, further large-scale studies comparing the accuracy and reliability of a range of consumer-accessible low-cost devices, such as the WSA, against both actigraphy and polysomnography in the context of neurological rehabilitation are necessary.
The discrepancies in sleep parameters between devices, and with self-report, may be due to a number of factors. Firstly, the small sample size may have been a contributing factor, as there was large variability in the data for each of the sleep parameters, and estimates could have been unduly influenced by outliers in the data. Future studies using larger samples of adults with stroke may reduce the observed data variability and develop further insight into whether the WSA and participant reports are comparable to the more expensive devices, which are held to be more accurate. Further, the discrepancies observed may also reflect the different activity thresholds each device uses to calculate sleep parameters. Actiwatch activity thresholds are well-defined and readily available, the device has generally shown high levels of agreement with polysomnography, particularly for detecting sleep time rather than wakenings, and it is considered a ‘gold standard’ for sleep monitoring in settings where polysomnography is impractical [18,21]. Conversely, activity thresholds are not readily available for many consumer devices, including the WSA. Obtaining transparency on the WSA activity thresholds will be key to determining whether it is comparable to established actigraphy devices in future research.
The actigraphy methodology used in this study differed from previous studies monitoring sleep quality in adults with stroke. Firstly, as this was a study investigating the feasibility and usability of the Actiwatch and WSA devices prior to further larger-scale studies, sleep was monitored over one night only. This is not consistent with recent recommendations that sleep monitoring via actigraphy be conducted over a 7–14-day period to account for night-to-night variability in sleep parameters within an individual [23,24]. These recommendations, however, are more relevant to studies investigating clinical understandings of sleep quality. Given that the current validation study focused on the agreement between these two devices, recommendations to monitor sleep for more than one night may not apply in this context.
While the activity thresholds and sleep parameters were comparable with previous studies, the limb on which the Actiwatch was worn differed. The Actiwatch was placed on the hemiparetic limb for safety reasons, which differed from recent studies investigating similar cohorts of adults with stroke [2,8,9,25]. This may have influenced the accuracy of sleep measurements. However, we found that SOL was longer than in previous studies [2,25], TST was comparable to one study [8] yet longer than others [2,25], while WASO was comparable to that found in similar studies [9], shorter than one [25], and longer than another [2]. Given potentially lower levels of movement in the hemiparetic limb overnight, awakenings in the hospital environment and WASO may have been underestimated, as wakeful periods overnight may not have produced movements of the affected limb. Further, SOL and TST may have been overestimated, as participants may have woken before sufficient movement of the hemiparetic limb triggered the Actiwatch to register that they were awake. However, all participants were able to move independently in and out of bed, minimizing the risk of this occurring. Rather, it appeared that the participants who had poorer agreement between devices had worse sleep (i.e., longer sleep onset latency, lower total sleep time, more awakenings, and lower sleep efficiency), similar to previous studies examining the agreement between consumer sleep monitoring devices and polysomnography [14]. Moreover, it is likely that the discrepancies between devices were due to the small sample size, the inherent variability of one night of sleep monitoring, and the potentially differing activity thresholds and sensitivity between devices, which may be more pronounced in people with hemiparesis.
## 4.1. Device Usability
The non-wearable WSA may offer a low-cost alternative to the Actiwatch device, measuring sleep from cardiac activity, breathing patterns, and snoring in addition to body movements. However, a number of usability issues arose that impacted its application. The WSA required a continuous power supply via a plug-in power source at the hospital bedside. Additionally, the device relied upon application-based data storage, so Bluetooth pairing capabilities and reliable internet network connectivity were required on the hospital wards. Given the often-unreliable network connectivity in the hospital environment and the variability in proximity of bedside power sources, these requirements presented challenges to data collection and contributed to the low sample size. Moreover, some patients preferred to sleep overnight in a recliner chair rather than the hospital bed, where the WSA device cannot monitor their sleep.
## 4.2. Limitations
As discussed above, there are a number of limitations to the study that may influence the interpretation of our findings. The small sample size, single night of sleep monitoring, and actigraphy sleep monitoring on the hemiparetic limb may have contributed to the reported poor level of agreement between the WSA and Actiwatch. However, despite our small sample size, the $95\%$ confidence intervals for the Bland–Altman tests were not dissimilar to those published by Chinoy et al. [14] for a similar “under-the-mattress” device compared with polysomnography in 19 healthy young adults. While the WSA measures sleep from physiological and behavioral signals in addition to bodily movements, there was very large variability in the WSA estimates relative to the Actiwatch, which determines sleep metrics from limb movements only. These differences may be even more pertinent for people who have had a stroke with hemiparesis. Following on from the methodology developed in this feasibility study, future studies addressing these limitations by increasing the sample size and wearing the Actiwatch on the non-hemiparetic limb may assist in obtaining new insights into the validity and utility of the WSA as a low-cost alternative to actigraphy for effective sleep monitoring in the hospital environment.
## 5. Conclusions
The findings from the present study suggest potential issues with the usability and accuracy of the WSA for monitoring sleep quality in adults who have been admitted to hospital for inpatient rehabilitation following a stroke. Future studies in larger samples and across multiple nights are needed to further investigate whether under-the-mattress technology can consistently and accurately monitor sleep parameters in adults with stroke. Moreover, developers of consumer devices for monitoring sleep should enable researchers to access their raw data so that the tools can be independently validated for reliable measurement of sleep in research settings and in unique clinical populations. Combining the use of wearable or under-the-mattress devices with subjective reports of sleep quality by adults using standardized outcome measures, together with review of clinical notes on patient sleep quality, may enable low-cost assessment of sleep in an inpatient rehabilitation setting. This may ultimately assist in understanding the nature and impact of sleep disturbances following stroke. Such studies may also enable the development of strategies to reduce the impact of the hospital environment on sleep quality, helping patients to have optimal engagement in rehabilitation programs to facilitate their recovery from stroke. |
# The Association of Perceived Neighbourhood Environment and Subjective Wellbeing in Migrant Older Adults: A Cross-Sectional Study Using Canonical Correlation Analysis
## Abstract
Existing studies often focus on the impact of the neighbourhood environment on the subjective wellbeing (SWB) of residents, but very few explore these impacts among migrant older adults. This study was conducted to investigate the correlations between perceived neighbourhood environment (PNE) and SWB among migrant older adults. A cross-sectional design was adopted. Data were collected from 470 migrant older adults in Dongguan, China. General characteristics, levels of SWB, and PNE were collected via a self-reported questionnaire. Canonical correlation analysis was performed to evaluate the relationship between PNE and SWB. The first pair of canonical variables accounted for $44.1\%$ and $53.0\%$ of the variance in the PNE and SWB variable sets, respectively. Neighbourhood relations, neighbourhood trust, and similar values in social cohesion made the most important contributions, correlating with positive emotion and positive experience. Walkable neighbourhoods, characterized by opportunities and facilities for physical activity and by other people walking or exercising in the community, were positively associated with positive emotions. Our findings suggest that a good walkable environment and social cohesion in neighbourhoods are positively correlated with the subjective wellbeing of migrant older adults. Therefore, the government should provide more robust activity spaces for neighbourhoods and build inclusive communities for older adults.
## 1. Introduction
In recent years, people have paid an increasing amount of attention to individual happiness aside from economic utility and tend to use subjective wellbeing (SWB) to evaluate social progress [1,2]. SWB refers to an individual’s subjective feelings about his or her life, or the overall feeling of happiness in life [3]. Research on SWB is attentive to people’s values, emotions, and evaluations, rather than relying fully on the external judgement of behavioural experts [4]. With regards to the study of SWB, early scholars predominantly focused on the impact of individual ‘endogenous characteristics’ on SWB, primarily referring to socioeconomic attributes such as age, gender, education, family income, marital status, and health status [5,6]. Empirical studies have demonstrated that demographic factors can only explain part of the difference in SWB, and some scholars have therefore begun to pay attention to the influence of ‘exogenous factors’, such as the social and residential environment, on SWB [7,8,9]. Studies on the influence of social factors on SWB are relatively numerous and focus on the influence of social support [10,11,12]. Existing studies have demonstrated that subjective support, objective support, and support utilisation have moderate positive correlations with overall SWB, life satisfaction, and positive emotions, and moderate negative correlations with negative emotions [10,11,12]. There are also some studies exploring SWB from the perspective of neighbourhoods. Hooghe and Vanhoutte’s (2011) research discovered that neighbourhoods with strong homogeneity have a weaker impact on SWB than neighbourhoods with strong heterogeneity [13]. Research conducted by Chen and Ning (2015) revealed that good neighbourhood relationships, frequent participation in activities, convenient shopping, and beautiful landscapes are the primary neighbourhood environmental factors that affect residents’ SWB [14].
As the neighbourhood is a basic daily living space for urban residents, it is necessary to weigh the neighbourhood environmental factors that directly affect people’s lives and their feelings about where they live. Existing studies have demonstrated that the neighbourhood influences people’s SWB; however, the relevant factors may differ at various stages of life [15,16]. The effects of the neighbourhood environment are less important in early to middle adulthood, since people at these ages work and play outside the neighbourhood more often than older adults do [16,17]. Compared to young people, older adults have different behaviours, mobility, and perceptions, and their demand for services and facilities can be extremely different [18,19]. These factors may influence and lead to differences in neighbourhood environmental needs and preferences at different stages of life. For numerous older adults, the neighbourhood in which they live is their primary environmental context [20]. The physical and social conditions of the neighbourhood environment may be more important to older adults, especially those who are retired or becoming frail. Thus, they may spend an increasing amount of time with neighbours in their neighbourhood [9,20].
In addition, existing studies often focus on the impact of the neighbourhood environment on SWB for established residents. However, for most people, the neighbourhood environment is not fixed, especially in recent years as residential migration has become increasingly common [21,22]. As an overall assessment of a person’s long-term quality of life [23], SWB is necessarily linked to life choices such as migration. Studies have demonstrated that the relocation of residence may cause changes in the living environment, which in turn have an impact on SWB [24,25]. From a spatial perspective, the longer the migration distance, the greater the life changes the migrant may experience. These changes relate not only to the degree of support from the original social network and capital but also to the challenge of adapting to the new environment [26,27,28]. Some studies suggest that residential migration can lead to significant changes in specific areas of life, bringing about changes in life satisfaction [29,30,31]. Residential migration is often accompanied by specific life course events. However, the environmental changes conveyed by migration at different life course stages, and the potentially important role of adaptation to the new environment in the relationship between the neighbourhood environment and SWB, have received little attention.
Due to China’s urbanisation, a large number of workers and their families from across the country have migrated to work and live in new places, especially cities with high economic status, such as Dongguan and Guangzhou, where jobs are more plentiful and lucrative [11,22]. The ‘one-child’ policy has reduced the size of Chinese families to two parents and one child. In keeping with filial piety, a major traditional Chinese value, older adults are brought along by their children who migrate for work to new places, with the children taking responsibility for their parents’ care. Other reasons for older adults’ migration include taking care of their grandchildren and reuniting with their families [22,32]. Internal migrant older adults in China, who accompany their adult children to new places, are viewed as ‘floating older adults’ or ‘senior drifters’ [5,32]. The number of migrant older adults has grown rapidly due to the persistence of internal migration and ageing trends in China [5,22]. Existing research on migrant older adults has gradually shifted from migration patterns and their causes and effects to the quality of life of migrant older adults [33]. For example, some studies have reported that taking care of grandchildren within the family significantly increases the life satisfaction of migrant older adults [5,22]. The social support of the government and the community has a significant positive impact on the social integration of migrant older adults [34]. However, most of the existing research lies in the fields of sociology and psychology, focusing on the impact of the social environment on the SWB or quality of life of this group, and few studies consider the impact of the neighbourhood environment [8,14]. Social cohesion is a social neighbourhood factor that affects SWB and is particularly relevant to older adults, since it is associated with neighbourhood social order and violent crime rates [20]. In fact, after these older adults migrate to the city, most of their outdoor and social activities are limited to the neighbourhood, which becomes the most important social support space for them [8]. The range of functional mobility and communication in cities for migrant older adults is much lower than that of local residents, and they tend to spend most of their time close to home [8,9,35]. Therefore, the study of the SWB of migrant older adults should take into account neighbourhood environment factors. Compared to objective indicators of the neighbourhood environment, the present study holds that the relationship between the subjective perception of the neighbourhood environment and SWB is more direct. In explaining SWB, subjectively perceived features of the neighbourhood environment are often more statistically significant than objective descriptions of environmental elements [36].
The attributes of the neighbourhood environment, and their relationship to SWB, are relatively well researched in Western countries [37,38], however, remain largely underexplored in China. Regarding the impact of the neighbourhood environment on SWB, most of the existing studies believe that positive features of the neighbourhood environment (e.g., walkability, availability of public services, and amenities) are associated with positive SWB [11,14], and most of these studies focus on the role of accessibility. For example, good accessibility to parks and green spaces can provide residents with open and natural public spaces, which has a positive impact on SWB [39]. Yet, few studies have explored the relationship between neighbourhood environment and SWB in Chinese migrant older adults. Even fewer studies examine the association of perceived neighbourhood environment (combined physical and social environment attributes) and SWB in Chinese migrant older adults. Is migrant older adults’ SWB associated with the physical and social environment of neighbourhoods where they live? To the best of our knowledge, no studies for the China setting are available to date examining perceived neighbourhood environment and SWB in migrant older adults.
To fill the knowledge gap, the present study aims to investigate the correlation between perceived neighbourhood environment and SWB amongst migrant older adults using canonical correlation analysis, which could inform the design of future interventions. Exploring the unique effects of neighbourhood attributes on migrant older adults’ SWB could be helpful to urban planners and public health officials in their efforts to build age-friendly neighbourhoods. The research will provide a reference and basis for individual behaviour decision making and community planning and governance.
## 2.1. Design
The present study employs a cross-sectional questionnaire survey conducted in Dongguan in South China, to determine the correlation between perceived neighbourhood environment and SWB amongst migrant older adults.
## 2.2. Subjects
This survey was performed amongst migrant older adults in Dongguan city between December 2018 and February 2019. The migrant older adults in this research were defined as any person aged not less than 60 years, those who had moved to Dongguan at least six months prior to the survey and were not listed in the household registration system of Dongguan. An eligible list of migrant older adults for the study was provided by the community committee. A multistage cluster sampling survey technique was used and 470 migrant older adults were invited to take part in the study ($98.2\%$ response rate). In the first stage, four districts were purposively selected out of 33 districts. In the second stage, 22 clusters were randomly selected from 26 communities with a probability proportional to the older adult’s density. In the third stage, within each cluster, migrant older adults were selected randomly.
## 2.3.1. Subjective Wellbeing (SWB)
SWB was assessed by the Memorial University of Newfoundland Scale of Happiness (MUNSH), which was designed specifically for older adults and has high validity (Kaiser-Meyer-Olkin (KMO) of 0.703) and consistency (Cronbach’s alpha of 0.735) [40]. The MUNSH is a multi-item scale with 24 items assessing four dimensions: positive emotion (PA) [e.g., ‘Generally satisfied with the way your life has turned out?’], general positive experience (PE) [e.g., ‘Are you satisfied with your life today?’], negative emotion (NA) [e.g., ‘Bitter about the way your life has turned out?’], and general negative experience (NE) [e.g., ‘How much do you feel lonely?’]. Numerous items on this scale cover specific content in the geriatric area with reference to age and time of life. Possible responses to each item are ‘yes’ (score 2 points), ‘I don’t know’ (1 point), and ‘no’ (0 points). The total SWB score was then calculated using the equation PA + PE − NA − NE. Total scores range from −24 to +24 points, where higher scores indicate better SWB [40].
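As a worked example of this scoring rule, the following Python sketch totals the four dimensions and applies PA + PE − NA − NE, assuming the usual MUNSH split of 5 PA, 5 NA, 7 PE, and 7 NE items; the responses shown are hypothetical:

```python
def munsh_total(responses):
    """Total MUNSH SWB score: PA + PE - NA - NE (range -24 to +24).

    `responses` maps each dimension to its item scores
    (yes = 2, don't know = 1, no = 0).
    """
    return (sum(responses["PA"]) + sum(responses["PE"])
            - sum(responses["NA"]) - sum(responses["NE"]))

# Hypothetical respondent, assuming a 5 PA / 5 NA / 7 PE / 7 NE item split:
answers = {"PA": [2, 2, 1, 2, 0], "NA": [0, 0, 1, 0, 0],
           "PE": [2, 1, 2, 2, 1, 2, 2], "NE": [1, 0, 0, 2, 0, 0, 0]}
print(munsh_total(answers))  # 15; higher scores indicate better SWB
```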
## 2.3.2. Perceived Neighbourhood Environment (PNE)
The perceived neighbourhood environment in the present study consists of the physical and social environment attributes, namely ‘walkability of the neighbourhood’ and ‘social cohesion’. These two environment attributes were assessed by the related module of the Neighbourhood Scales developed by Mujahid [41]. The walkability of the neighbourhood was measured with seven items (The specific items are presented in Table 1), asking the participants if they believed that their neighbourhood offered opportunities and facilities for physical activities, has adequate green space and walkable places, and if they observed other people walking in their neighbourhood. The questionnaire uses a 5-point Likert Scale, ranging from 1 = strongly disagree to 5 = strongly agree with the statements. The Cronbach’s alpha of the original scale was 0.73 [20,41]. The total score ranges from 7–35.
Social cohesion is comprised of four questions asking the respondent about their values such as interpersonal trust, and their relationship with their neighbours. This questionnaire also uses a 5-point Likert Scale, with responses ranging from 1 = strongly disagree to 5 = strongly agree with each statement. The Cronbach’s alpha of the original scale was 0.74 [20,41]. The total score ranges from 4–20.
## 2.3.3. Individual Characteristics
The socio-demographic factors recorded were gender, age, living arrangements, health insurance, and pension status. Living arrangement was categorized as “living with child only”, “living with child and spouse”, “living with child and grandchild”, “living with child, grandchild and spouse”, and “living alone”. Health insurance and pension status were each divided into a “have” group and a “have not” group. Self-rated health was divided into three ordinal categories: “good”, “fair”, and “poor”.
## 2.4. Data Collection
Nine research assistants (second-year postgraduates) and community staff were trained at a workshop. All interviewers were trained before the formal collection of data by an experienced researcher. The workshop included an introduction to the study and the methods and skills of conducting quantitative interviews. The questionnaires were tested in a pilot study. Face-to-face interviews using the structured questionnaire were conducted. All of the participants were interviewed at their homes using their local language by trained interviewers. Each interview took about 20–25 min. The supervisors checked the completion of the questionnaire during the fieldwork. If information was missing, the interviewer went back to obtain the missing information.
## 2.5. Data Analysis
SPSS V.26.0 software was used to process the data. A Pearson correlation analysis was used to analyse the correlations between the perceived neighbourhood environment variables (X1-X11) and the SWB dimensions (PA, NA, PE and NE). Canonical correlations between the perceived neighbourhood environment variables and the SWB dimensions were analysed after standardising the scores of each variable.
Canonical correlation analysis is an approach that involves the application of structure coefficients as indices for the identification of important indicators. It is a multivariate statistical analysis method used to determine the correlation between two sets of variables using the correlation between the combined pairs of variables to reflect the overall correlation between the two sets of indicators [42]. This paper focuses on the correlation between the two sets of variables of neighbourhood environment and subjective wellbeing, so canonical correlation analysis was chosen.
Canonical redundancy reflects the percentage of variance explained by each canonical variable for each group of variables. If the canonical variables represent the original variables well, predictions can be made via the canonical correlations. The magnitude of the redundancy indicates the extent to which each pair of canonical variables can explain the variance of the other set, providing useful information for further discussion of the many-to-many relationships between the two sets [42].
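For readers wishing to reproduce this kind of analysis outside SPSS, the sketch below illustrates canonical correlations, structure coefficients, and a redundancy index in Python with scikit-learn on simulated stand-in data; note that scikit-learn's CCA is fitted iteratively and may differ slightly from the classical eigendecomposition used by some statistical packages:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 470
X = rng.normal(size=(n, 11))  # stand-in for the PNE items X1-X11
Y = rng.normal(size=(n, 4))   # stand-in for the SWB dimensions PA, NA, PE, NE

cca = CCA(n_components=4, scale=True)  # scale=True standardizes each variable
U, V = cca.fit_transform(X, Y)         # canonical variates for each pair

# Canonical correlation of each variate pair
r = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(4)]

# Structure coefficients: correlations of the original X variables
# with the X-side canonical variates
x_structure = np.corrcoef(X.T, U.T)[:11, 11:]

# Redundancy: average squared correlation of the Y variables with each
# X-side variate, i.e., the share of Y-set variance explained by that variate
y_loadings = np.corrcoef(Y.T, U.T)[:4, 4:]
redundancy = (y_loadings ** 2).mean(axis=0)
print(np.round(r, 3), np.round(redundancy, 3))
```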
## 2.6. Ethical Considerations
Ethical approval was received from the Institutional Ethics Committee of the Ethics Review Committee of Guangdong Medical University, China (REC: PJ2018037) before the research was conducted. Privacy and data confidentiality were ensured. Voluntary participation and unconditional withdrawal were offered to all participants. A small gift was given as a thank you for their participation.
## 3. Results
The total sample consisted of 470 migrant older adults. Of those, 275 were female (58.5 percent) and 195 were male (41.5 percent). The mean age of the participants was 67.1 years (SD 5.5), with a minimum age of 60, and a maximum age of 87 years. Most participants had fair to good health ($$n = 424$$, 90.2 percent). Most of the migrant older adults lived with their families with an average of more than three members ($$n = 456$$, 97.0 percent). Approximately one-third of migrant participants lacked health insurance ($$n = 135$$, 28.7 percent) and had no pension ($$n = 166$$, 35.3 percent).
Table 2 illustrates the results of SWB variables and PNE variables. For SWB, the mean score (x ± s) of the total scores for SWB was 14.76 ± 8.31. For PNE, the mean score (x ± s) of walkability and social cohesion were 27.34 ± 5.15 and 15.69 ± 2.62, respectively.
The simple correlation analyses of PNE and SWB demonstrated that the correlations with PA and PE ranged between $r = 0.276$ and $r = 0.423$, indicating a moderate level of correlation, whilst NA and NE were negatively correlated with X1–X3 and X8–X11 at a low level of correlation (Table 3).
The 11 variables (X1–X11) from the above simple correlation analysis were used as the X set, and the scores of the SWB dimensions were used as the Y set for canonical correlation analysis; four pairs of canonical variables were obtained (Table 4).
The results revealed that, of the four pairs of canonical variables, two pairs were statistically significant (r1 = 0.402, $p \leq 0.0001$ and r2 = 0.257, $p \leq 0.05$), demonstrating that there was a correlation between SWB and the PNE variables. The first pair of canonical variables contained 60.88 percent of the information. The first two pairs of canonical variables cumulatively contributed 82.87 percent of the information.
Table 5 reveals that, in the first pair of canonical variables, neighbourhood relations (X9), neighbourhood trust (X10), and similar values (X11) in social cohesion are positively correlated with PA (Y1) and PE (Y3). In the second pair of canonical variables, the walkability items X1, X2, X6, and X7 and the social cohesion item of neighbours helping each other (X8) are closely correlated with PA (Y1) and NA (Y2) in SWB.
Redundancy analysis (Table 6) demonstrated that amongst the first pair of canonical variables, U1 could explain 44.1 percent of the total variation in the X variable set and 7.1 percent in the Y variable set, whilst V1 could explain 53.0 percent of the total variation in the Y variable set and 8.6 percent in the X variable set. In the second pair of canonical variables, U2 could explain 3.8 percent of the total variation in the X variable set and 0.3 percent in the Y variable set, whilst V2 could explain 16.2 percent of the total variation in the Y variable set and 1.1 percent in the X variable set.
## 4. Discussion
This study explored the correlation between PNE and SWB among migrant older adults to understand the relative importance and level of the components of PNE and SWB. The results showed that migrant older adults with a high PNE have better PA and PE, which leads to a generally high SWB. This result is in line with previous studies, which suggested that higher PNE leads to higher SWB. Previous studies have confirmed that neighbourhood built environments (e.g., walkability) and social environments (e.g., social cohesion) affect older adults’ SWB [8,11]. It is possible that this is because older adults are more dependent on their neighbourhoods, and changes and adaptations in the neighbourhood environment have a greater impact on their lives [9,20].
Neighbourhood environments matter since they are socially structured and represent differential amenities, including access to physical resources, social support, and relationships [43]. Furthermore, the residential neighbourhood is the predominant environmental context for older adults, particularly those who are retired or who migrated with family [9,20]. Therefore, they likely spend increasing amounts of time with neighbours in the neighbourhood. Thus, this study provides evidence for the need to reinforce the neighbourhood environment for migrant older adults to improve their SWB, by demonstrating a more comprehensive canonical correlation between the eleven elements of PNE and SWB.
The social environment attributes of PNE (relationships with neighbours, interpersonal trust, and shared values) are associated with positive emotions and wellbeing experiences, consistent with previous studies [8,20,43]. Older adults may be more affected by neighbourhood characteristics (e.g., social cohesion) than younger adults, who have been reported in some studies as being more concerned about environmental pollution and neighbourhood beautification [44,45]. Social cohesion is an aspect of the neighbourhood’s social environment influencing individual health-related behaviours such as physical and recreational activities [46]. Social cohesion refers to the absence of potential social conflict and the presence of strong social bonds, usually measured by levels of trust and reciprocity norms [47].
Cohesive neighbourhoods may be better at reinforcing positive social norms for health behaviours, leading to quicker adoption of these norms by new residents, since neighbours know and trust each other [11,12,22]. In addition, neighbours who trust one another are more likely to provide help and support in times of need, particularly for migrant older adults, who face the dilemma of losing geopolitical ties and have difficulty integrating into new cities [20,22]. Research has demonstrated that people may only trust those in the same in-group and may not participate in social activities outside their circle [48]. Therefore, migrant older adults who share the same values as their neighbours are more likely to establish good relationships and mutual trust, which promotes their SWB.
The current study used SWB to examine the association of physical neighbourhood attributes and walkability. We found that walkable neighbourhoods, characterized by opportunities and facilities for physical activity and by other people walking or exercising in the community, were positively associated with positive emotions and negatively associated with negative emotions. Previous studies showed that neighbourhood walkability is related to leisure-time physical activity among Chinese and U.S. older adults [49,50]. Walking is correlated with both improved physical and emotional health [51]. In addition, researchers found a link between walkable neighbourhood attributes, including land use diversity and well-connected transportation networks, and more walking, less obesity, and lower coronary heart disease risk [52,53].
Migration, retirement, and other major life events tend to create anxiety, pessimism, depression, and other negative emotions in migrant older adults [5]. However, a good walking environment provides migrant older adults with conditions for exercise and creates a platform for them to communicate with their neighbours. Through walkable neighbourhoods, migrant older adults can avoid the phenomena of “social isolation” and “social insularity” caused by long-term absence from home [54]. It also helps them to improve their self-worth and maintain positive mental health while participating in social activities [54]. For example, Wiles (2012) found that a high-quality physical neighbourhood environment enhances wellbeing [55]. This effect arises because people have an innate emotional connection to their neighbourhood environment, and open spaces can increase social interaction.
This study is meaningful since our comprehensive analyses factored in the various elements of PNE to demonstrate the canonical correlation between PNE and SWB of migrant older adults. The study extends prior research by focusing specifically on perceived neighbourhood environments—defined in this study as the combined physical and social environments—of Chinese migrant older adults [9,11,12]. The statistical approach of canonical correlation analysis is appropriate for identifying the associations between the two sets of variables of the physical and social environments, measured as walkability and social cohesion, and subjective wellbeing. Our study emphasized that positive physical and social environments are likely to contribute to the positive subjective wellbeing of the elderly. The findings could provide evidence to help governments design healthy ageing policies to improve SWB at the community level. The findings also could potentially be expanded to other population groups, as positive physical and social environments are likely to contribute to positive subjective wellbeing beyond migrant older adults. However, this study has some limitations. First, we collected cross-sectional data based on self-reports; thus, we cannot address the direction of causality. Second, we conducted this study in only one city, which may not represent all migrant older adults in China. Therefore, future research should consider well-designed multicentre prospective studies of neighbourhood correlates of SWB. Third, the data collection instruments (with their focus on walkability) lack accounts of other physical dimensions and amenities (e.g., health centres, banks, elderly activity centres, and parks) that support older people’s wellbeing, and these could be highlighted as potential avenues for further research. Finally, we did not assess the effect of changes in the socioeconomic status of the whole family on SWB or the range of time elapsed since participants’ migration, both of which could affect migrant older adults’ SWB. Future research studies could explore this further.
There are some policy implications in this study’s findings. Our study suggests that the physical and social attributes of neighbourhoods are strongly associated with migrant older adults’ SWB. Previous studies found that migrant older adults’ restricted access to social benefits and social relations was detrimental to their mental health [11,12]. Our findings confirm this point and further suggest that a good walkable environment and neighbourhood social cohesion are positively correlated with migrant older adults’ subjective wellbeing. Therefore, the government should provide more robust activity spaces for neighbourhoods and optimize the quality of life for older adults.
In addition, migrant older adults should be encouraged to participate in community activities to enrich their lives and improve their SWB. Finally, they could improve their wellbeing through inclusive community building. This approach requires breaking the closure and exclusion in the configuration of community power. Eliminating the identity segregation and social exclusion of residents in sharing community resources and promoting good neighbourliness among older adults with different identities and backgrounds will enhance migrant older adults’ sense of community cohesion and community belonging.
## 5. Conclusions
As residential migration becomes more common, the neighbourhood environment inevitably changes, and one needs to adapt to the new neighbourhood environment. This paper focused on older adults after residential migration. It first explored the relationship between the new neighbourhood environment and SWB after migration, enriching the study of the relationship between neighbourhoods and SWB. However, further in-depth analysis is needed of how changes in the neighbourhood environment before and after residential migration affect older adults’ wellbeing, and of the critical role of other aspects of residential migration.
# Association between Anemia Severity and Ischemic Stroke Incidence: A Retrospective Cohort Study
## Abstract
Stroke patients presenting with anemia at the time of stroke onset have a higher risk of mortality and of developing other cardiovascular diseases and comorbidities. The association between the severity of anemia and the risk of developing a stroke is still uncertain. This retrospective study aimed to evaluate the association between stroke incidence and anemia severity (classified by WHO criteria). A total of 71,787 patients were included, of whom 16,708 ($23.27\%$) were identified as anemic and 55,079 were anemia-free. Female patients ($62.98\%$) were more likely to have anemia than males ($37.02\%$). The likelihood of having a stroke within eight years after anemia diagnosis was calculated using Cox proportional hazard regression. Patients with moderate anemia had a significant increase in stroke risk compared to the non-anemia group in univariate analyses (hazard ratio [HR] = 2.31, $95\%$ confidence interval [CI], 1.97–2.71, $p \leq 0.001$) and after adjustment (adj-HR = 1.20, $95\%$ CI, 1.02–1.43, $$p = 0.032$$). The data reveal that patients with severe anemia received more anemia treatment, such as blood transfusion and nutritional supplementation, and that maintaining blood homeostasis may be important for preventing stroke. Anemia is an important risk factor, but other risk factors, including diabetes and hyperlipidemia, also affect stroke development. Heightened awareness of anemia severity and the associated increase in stroke risk is warranted.
## 1. Introduction
Stroke is a leading cause of death and disability worldwide [1]. Stroke survivors suffer from various impairments and complications affecting motor, sensory, visual, language, and cognitive functions [2,3]. Therefore, a stroke imposes a great burden on patients as well as their caregivers and family members. Stroke patients may be hospitalized or may frequently visit the emergency department owing to their long-term sequelae and disability, which not only dramatically increases the burden on caregivers and their family’s finances, but also severely affects their quality of life. There are numerous recognized risk factors for stroke, such as hypertension, hyperlipidemia, diabetes mellitus, cigarette use, obesity, age, and physical activity [1,4,5]. Increases in the elderly population and life expectancy are also key reasons for the increase in number of stroke patients.
Anemia affects 15–32% of the world’s population, is usually present in stroke patients, and can worsen with aging [6,7]. In 2019, the age groups of 15 to 19 and 95 and older, for both males and females, had the highest global point prevalence of anemia. The mean (range) global prevalence rates of mild, moderate, and severe anemia were approximately $54.1\%$ (53.8–$54.4\%$), $42.5\%$ (42.2–$42.7\%$), and $3.4\%$ (3.3–$3.5\%$), respectively [8]. Elderly individuals may experience malnutrition and dyspepsia as their physical condition deteriorates with age, and this may affect their hematopoietic function, thereby causing anemia or pancytopenia. Anemia is also a risk factor for ischemic stroke and is related to high post-stroke mortality [9,10].
Previous research has suggested that anemia may raise the risk of stroke; however, the new stroke guidelines from the American Stroke Association (ASA) do not list anemia as a major stroke risk factor [11]. Here, we conducted a retrospective cohort study to investigate the association between the severity of anemia and stroke incidence. Owing to Taiwan’s National Health Insurance (NHI) policy, anemia is rarely listed as a primary condition and may not be documented in patient medical records on the basis of International Classification of Diseases, Tenth Revision (ICD-10) codes. Laboratory data on anemia status are not available in Taiwan’s NHI system, and the prevalence of anemia could therefore be underestimated. Moreover, data on the association between anemia and comorbidities in the Taiwanese population are scarce. An evaluation of stroke risk factors, especially anemia severity, could provide important information that may enhance medical care or even national healthcare planning. This study retrospectively evaluated the prevalence and characteristics of anemia in hospitalized patients and analyzed whether anemia severity based on the hemoglobin (Hb) level was associated with stroke development.
## 2.1. Study Cohort
This retrospective cohort study included 454,424 patients aged ≥20 years who had visited or were hospitalized at Taichung Tzu-Chi Hospital, Taiwan, from 2013 to 2019. A total of 71,787 patients underwent at least 1 blood Hb measurement performed using a Sysmex XE-5000 hematology analyzer (Sysmex Co., Kobe, Japan) within 1 year to confirm their anemia status. This study was approved by the Research Ethics Committee of Taichung Tzu-Chi Hospital (REC 111-02). The need for informed consent was waived owing to the retrospective nature of the study and the use of anonymous medical records.
## 2.2. Definition of Anemia and ICD Codes
Adult patients older than 20 years of age were included in this study. All participants completed at least one Hb measurement, and persons who did not fulfill the predetermined criteria were not included. The date of laboratory Hb measurement was defined as the index date, and anemia severity was classified according to the World Health Organization (WHO) criteria [12]. We categorized the patients into different groups according to their anemia severity. Anemia is defined as an Hb level of <13.0 g/dL for men and <12.0 g/dL for women. The Hb cutoff for mild anemia was 11.0–11.9 g/dL for women and 11.0–12.9 g/dL for men, whereas the cutoffs for moderate and severe anemia were 8.0–10.9 and <8.0 g/dL, respectively, for both men and women. As shown in Figure 1, the exclusion criteria were as follows: (1) patients without Hb measurements; (2) receiving a diagnosis that might affect the Hb status, including gastric intestinal bleeding (ICD-10 code K92.2), bleeding (ICD-10 code R58), trauma (ICD-10 code T79.2), excessive bleeding associated with menopause onset (ICD-10 code N92.4), intraoperative and postprocedural complications of the spleen, endocrine, and nervous systems (ICD-10 codes D78, E36, G97), excessive bleeding with onset of menstrual bleeding (ICD-10 code N92.2), traumatic hemorrhage of the cerebrum (ICD-10 code S06.360A), hemorrhage from respiratory passages (ICD-10 code R04.9), nontraumatic intracerebral hemorrhage (ICD-10 code I61.9), spleen diseases (ICD-10 code D73), pulmonary vessel diseases (ICD-10 code I28), stomach and duodenum diseases (ICD-10 code K31), acute myocardial infarction (ICD-10 code I21), injury to an unspecified body region (ICD-10 code T14), or absent, scanty, or rare menstruation (ICD-10 code N91), before the index date until anemia diagnosis; (3) receiving a stroke diagnosis before the index date on the basis of ICD-10 code I63; (4) not visiting our out-patient clinic or being hospitalized within the last 2 years; and (5) death or leaving against medical advice (DAMA) less than 1 month after the index date.
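The WHO-based severity grouping above amounts to a simple set of Hb cutoffs. A minimal Python sketch of the classification rule as stated (not the study's actual data pipeline) is:

```python
def anemia_severity(hb_g_dl, sex):
    """Classify anemia severity from hemoglobin using the WHO cutoffs above.

    sex: 'M' or 'F'. Returns 'none', 'mild', 'moderate', or 'severe'.
    """
    if hb_g_dl >= (13.0 if sex == "M" else 12.0):
        return "none"
    if hb_g_dl >= 11.0:   # 11.0-12.9 g/dL (men) or 11.0-11.9 g/dL (women)
        return "mild"
    if hb_g_dl >= 8.0:    # 8.0-10.9 g/dL for both sexes
        return "moderate"
    return "severe"       # <8.0 g/dL for both sexes

print(anemia_severity(10.2, "F"))  # moderate
```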
A flowchart of the patient enrollment process is illustrated in Figure 1. All patients were grouped by sex and age (20–30, 31–40, 41–50, 51–60, 61–70, 71–80, and >80 years). The Hb status confirmation date was identified as the index date for the case and control groups, and stroke events were followed subsequently.
## 2.3. Outcome and Associated Factors
The eligibility of all patients was retrospectively determined in this cohort study. The severity of anemia was subgrouped based on Hb level, and stroke patients were those who had at least two ICD-10 admission claims for clinic OPD visits or stroke-related hospitalization in our hospital during the study period. The occurrence of subsequent disease was analyzed during the observation period. Patients were individually tracked for 2–8 years, beginning on the index date. In this study, the outcome of stroke was defined as admission claims of ICD-10 code I63, cerebral infarction. The accuracy of diagnoses from claims data was verified in a previous study showing that the PPV and sensitivity of ICD-10-CM code I63 as a primary diagnosis of acute ischemic stroke were $92.7\%$ and $99.4\%$, respectively [13]. We also analyzed the hazard ratios for comorbidities that were potentially linked to stroke: hypertension (I10–I13, I15), diabetes (E08–E11, E13), chronic kidney disease (CKD; N17–N19, I12, I13), chronic heart failure (I50), chronic obstructive pulmonary disease (J44, J60–70), hyperlipidemia (E78.0–E78.5), and atrial fibrillation (I48). Comorbidities were defined as the presence or absence of accompanying disease within one year before the index date of anemia. The national health insurance program (NHI) in Taiwan is mandatory for all citizens, and medications and medical procedures are coded with unique codes. In this study, six frequently prescribed drugs were included to investigate the efficacy of various anemia therapies for patients within six months after the hemoglobin measurement index date. These medications included iron (hydroxide-polymaltose complex, Yuanchou Chemical and Pharmaceutical Co., Ltd., Taiwan, NHI code AC46166100), ferric hydroxide sucrose complex (TCM Biotech International Corp., Taiwan, NHI code AC57884221), sodium ferrous citrate (Guang Heng Enterprise Co., Ltd., Taiwan, NHI code BC22097100), hydroxocobalamin acetate (Shinlin Sinseng Pharmaceutical Co., Ltd., Taiwan, NHI code AC09754209), mecobalamin (Eisai Taiwan Inc., NHI code AC296301G0), and folic acid (Johnson Chemical Pharmaceutical Works Co., Ltd., Taiwan, NHI code AC346701G0), as well as blood transfusion (NHI code 94001C).
## 2.4. Statistical Analysis
Statistical analyses were conducted using the SAS statistical package (version 9.4) and SPSS (version 28.0, SPSS Inc., Chicago, IL, USA) to examine the prevalence and clinical trends of anemia among the different age groups, sexes, and comorbidities. Categorical variables were assessed using the Chi-square test, and continuous variables were assessed using the t test. Furthermore, different predictors were used to estimate relative risks [14]. To examine the stroke risk associated with anemia, deaths as competing risks of stroke were handled by using a Cox proportional cause-specific hazard model to calculate hazard ratios (HRs), $95\%$ confidence intervals (CIs), and two-sided p values. A two-sided p value of <0.05 was considered statistically significant. The multivariate Cox proportional cause-specific hazard regression model was adjusted for age, sex, and comorbidities. The proportional hazards assumption was evaluated with the Kolmogorov-type supremum test and was not violated.
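As an illustration of this modelling step, the sketch below fits a cause-specific Cox model in Python with the lifelines package on simulated data; the analyses here were performed in SAS/SPSS, and all variable names and values in the sketch are hypothetical:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500  # small simulated cohort; the actual study analyzed 71,787 patients

df = pd.DataFrame({
    "moderate_anemia": rng.integers(0, 2, n),
    "age": rng.integers(20, 90, n),
    "male": rng.integers(0, 2, n),
})
# Simulated follow-up in years, capped at 8 as in the study design.
df["time"] = np.minimum(rng.exponential(10.0, n), 8.0)
# Event indicator: 1 = ischemic stroke (ICD-10 I63). In a cause-specific
# analysis, patients who die before a stroke are censored (coded 0) at death.
df["stroke"] = rng.integers(0, 2, n)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="stroke")
cph.print_summary()  # hazard ratios, 95% CIs, and two-sided p values
```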
## 3. Results
As shown in Figure 1, only 71,787 of the 454,424 patients who visited our facility qualified for the retrospective cohort research. The baseline characteristics of the case and control groups are summarized in Table 1. The mean Hb level was 14.2 ± 1.3 g/dL in the normal group and 10.7 ± 1.6 g/dL in the anemia group.
Of the 16,708 anemia patients, 6185 ($37.02\%$) were men and 10,523 ($62.98\%$) were women. The mean age of the case group was 59.1 ± 18.5 years, and that of the control group was 50.6 ± 16.3 years. The case group had a higher incidence of comorbidities, including hypertension ($11.80\%$ versus $20.40\%$, $p \leq 0.001$), diabetes ($6.74\%$ versus $14.77\%$, $p \leq 0.001$), CKD ($0.90\%$ versus $6.07\%$, $p \leq 0.001$), chronic heart failure ($0.91\%$ versus $2.67\%$, $p \leq 0.001$), chronic obstructive pulmonary disease ($2.10\%$ versus $2.96\%$, $p \leq 0.001$), and atrial fibrillation ($0.49\%$ versus $1.01\%$, $p \leq 0.001$), than did the control group.
Table 2 presents the anemia severity and subsequent cases of stroke. The patients with anemia were further divided into three subgroups according to anemia severity, determined on the basis of Hb levels by WHO criteria [12]. Thus, of the 16,708 patients with anemia, 9065 ($54.25\%$) had mild anemia, 6532 ($39.09\%$) had moderate anemia, and 1111 ($6.65\%$) had severe anemia. During follow-up, a total of 447 anemia patients ($2.68\%$, 447/16,708) and 744 controls ($1.35\%$, 744/55,079) were diagnosed as having stroke. Moreover, there were 740 non-anemia patient deaths and 1229 anemia patient deaths throughout the 8-year follow-up period ($1.34\%$ and $7.63\%$, respectively).
We observed a positive association between the severity of anemia, determined based on Hb measurements, and the risk of stroke. Figure 2 illustrates the cumulative incidence of stroke in the three subgroups of anemia severity during the 8-year follow-up. A higher incidence of stroke events was noted in the patients with moderate anemia after their diagnosis during the 8-year follow-up (log-rank test, $p \leq 0.001$).
Table 3 illustrates the univariate and adjusted associations between the risk of stroke and the severity of anemia, sex, age, and comorbidities. The risk of stroke was higher in the case group than in the control group. In univariate regression analysis, we found that moderate anemia (HR = 2.31; $95\%$ CI, 1.97–2.71) was associated with a significant increase in stroke risk compared to the non-anemia group. After adjustment, the risk of stroke remained higher in the patients with moderate anemia (adj-HR, 1.20; $95\%$ CI, 1.02–1.43, $$p = 0.032$$) than in the controls. Similar results were obtained for sex and age in both univariate analysis (HR = 1.66, $95\%$ CI = 1.48–1.87, $p \leq 0.001$; HR = 1.07, $95\%$ CI = 1.07–1.08, $p \leq 0.001$, respectively) and adjusted analysis (adj-HR = 1.64, $95\%$ CI = 1.46–1.85, $p \leq 0.001$; adj-HR = 1.07, $95\%$ CI = 1.065–1.074, $p \leq 0.001$, respectively). Furthermore, the case group had a higher prevalence of comorbidities than the control group. Among the comorbidities, diabetes mellitus and hyperlipidemia were associated with a higher risk of stroke in univariate analysis (HR = 2.86, $95\%$ CI = 2.50–3.28, $p \leq 0.001$; HR = 1.89, $95\%$ CI = 1.54–2.31, $p \leq 0.001$, respectively); after adjustment, the association remained significant for diabetes mellitus (adj-HR, 1.48; $95\%$ CI, 1.27–1.71; $p \leq 0.001$) but not for hyperlipidemia (adj-HR, 1.13; $95\%$ CI, 0.91–1.39; $$p = 0.280$$).
## 4. Discussion
This retrospective study evaluated the prevalence and characteristics of anemia and the risk of stroke. The strength of this study is that it identified the association between anemia and the risk of stroke by using a hospital-based database, from which laboratory data were retrieved to classify the severity of anemia. In contrast to previous studies, which estimated the risk of stroke associated with anemia by using data from Taiwan’s NHI databases based on ICD codes and lacked conclusive laboratory Hb measurements [15,16], our study analyzed laboratory data and classified the patients into subgroups according to the severity of anemia to assess the associations between anemia severity and the risk of stroke. We also excluded patients with diseases that might interfere with our results, including those with a bleeding tendency or other hemorrhagic diseases. All participants in this study completed at least one Hb measurement, and persons who did not fulfill the predetermined criteria were excluded. Our findings indicate that patients with moderate anemia showed an increased likelihood of stroke development.
In this retrospective analysis, there were more female than male anemic patients. In the initial stage, the primary signs of mild anemia include fatigue, pale skin, dizziness, weakness, and headaches. Patients in the early stage of anemia or with mild anemia may not seek medical care or consult physicians, particularly middle-aged men; many male patients consequently had no qualifying hospital visits within the 2-year window. On the other hand, most women experience menopause at the age of 40–50 years; thus, some anemia symptoms, such as dizziness, fatigue, or paleness, may be overlooked or misdiagnosed as menopausal symptoms. Even when individuals visit a hospital or clinic, medical personnel tend to focus more on other maladies than on anemia. However, if the anemic condition is left untreated for a longer period, the consequences and complications can become more severe, causing shortness of breath, low blood pressure, arrhythmia, and even chronic heart failure. Results from this research demonstrated an increased risk of stroke occurrence in patients with moderate anemia compared with the non-anemia control group. Additionally, the mortality rate in the severe anemia group was $12\%$, much higher than that of the other patients with anemia in this study. Patients suffering from severe anemia might die from other illnesses caused by their feeble condition prior to having a stroke. As a result, the observed risk of stroke in the severe anemia group was lower than in the moderate anemia group.
According to statistical data from Taiwan’s Ministry of the Interior, the population aged >65 years increased from $11.15\%$ in 2012 to $16.68\%$ in 2021. In the past two decades, the average life expectancy also increased from 76.75 to 81.30 years. The Council for Economic Planning and Development estimated that Taiwan will become a super-aged society by as early as 2025; moreover, the population aged ≥65 years is expected to account for >$20\%$ of all individuals [17]. This accelerating pace of aging has become a burden to the healthcare system and society. In this study, there was an upward trend in the prevalence of anemia with age (from $6.72\%$ in the 20–30 age range to over $15\%$ in the elderly age groups; Table 1). Our results are consistent with the global prevalence of anemia, indicating that the burden of anemia increases with age [18,19,20]. We observed that the anemia prevalence peaked at $17.3\%$ in the 71–80 age group and reached $14.4\%$ in the >80 age group; the anemia patients in these two age ranges corresponded to $4.2\%$ (2990/71,787) and $3.4\%$ (2411/71,787) of the study cohort, respectively. In this study, the prevalence of anemia in people over 60 was approximately $11\%$, which is lower than in other Asian countries, such as Korea, where it is $13.8\%$ for people over 65 [20]. Anemia is a common condition in older adults and can be caused by various factors such as poor nutrition, chronic diseases, medication, and limited access to healthcare. Taiwan has a relatively high standard of living, and the population has access to a variety of nutritious foods, which helps to prevent nutrient deficiencies, including iron deficiency. Moreover, Taiwan has a well-developed healthcare and medical insurance system. The Ministry of Health and Welfare also promotes and encourages all citizens above the age of 45 to participate in adult health checkup programs. These programs enable the early detection of diseases such as cancer and other chronic diseases, as well as the delivery of comprehensive healthcare before a disease worsens [17]. All of those factors may therefore contribute to reducing the overall prevalence of anemia in the population. In elderly people, anemia has been reported to be associated with cardiovascular disease [21], stroke [6], dementia [22], frailty [23], and high morbidity as well as mortality [24]. Because of Taiwan’s NHI policy, however, anemia has rarely been listed as a primary condition in elderly people. According to the WHO recommendation, an anemia prevalence of >$5\%$ is considered to be of public health significance [12] and may require public health attention and intervention. The increased prevalence of anemia in the elderly should be considered an important public health issue in Taiwan.
In this study, we also observed a higher prevalence of pre-existing comorbidities in the anemia group than in the non-anemia population. The patients with moderate to severe anemia had higher all-cause mortality than the non-anemia group; this trend has been noted in previous studies [9,25]. Other unreported comorbidities may interfere with the association between anemia and stroke. Severe anemia can often be corrected promptly, whereas mild to moderate anemia may become a chronic condition that eventually becomes associated with stroke. In this study, we observed that patients with severe anemia required blood transfusions more frequently than the moderate anemia group and the control group. Notably, a study by Ren et al. published in Nature Communications raises the possibility that blood transfusions might be beneficial even up to seven hours after a stroke in a mouse model; their team discovered that replacing $20\%$ of the mouse’s blood was sufficient to significantly lessen brain damage [26]. However, few studies have focused on maintaining the hemodynamic condition of patients with severe anemia to prevent stroke. Therefore, further studies are needed to clarify the benefit of blood transfusions in this context. Furthermore, the different therapeutic strategies may explain why severe anemia portends a lower stroke risk than other anemia severities.
Studies assessing the association between anemia and comorbidities in the Taiwanese population are rare. Anemia, a direct consequence of decreases in Hb and red blood cell (RBC) levels in circulation, is a multifactorial condition; deficiencies of iron, folate, and vitamin B12 are well-known causes. The most common type is iron deficiency anemia, which may account for as much as $50\%$ of all explained anemia cases [27]. Other diseases such as diabetes, chronic infections, inflammation, and CKD also affect RBC proliferation, erythropoietin production, androgen secretion, and myelodysplasia [28]. Anemia is also positively associated with impaired renal function. Taiwan has one of the highest rates of CKD and end-stage renal disease in the world, and CKD is the most frequent cause of anemia [8,20,21,29]. The severity of anemia is directly related to the degree of renal dysfunction: CKD reduces erythropoietin synthesis, subsequently resulting in decreased RBC production. At least one-third of anemia patients aged >65 years have CKD or autoimmune diseases/chronic infection [30]. Patients with CKD are also at significant risk for stroke, including the ischemic and hemorrhagic subtypes. The mechanisms linked to the higher risk of stroke in CKD patients include alterations in cardiac output, platelet function, and regional cerebral perfusion, accelerated systemic atherosclerosis, an altered blood-brain barrier, and disordered neurovascular coupling [31]. Additionally, Poznyak et al. recently characterized the atherosclerosis-specific features of CKD [32]. The major symptoms of anemia may range from mild fatigue to severe systemic illness. In addition, accumulating evidence indicates that anemia engenders outcomes such as increased stroke [9], heart failure [33], hospitalization [25], and mortality [34], all of which impose a severe burden on healthcare systems. Furthermore, anemia is associated with iron overload, increased chances of viral infection [35], and increased risk of myocardial infarction [36]. We also analyzed other known conventional risk factors, such as hyperlipidemia and atrial fibrillation, that affect the development of stroke; the hazard ratios were slightly different from those of other investigations [9]. Hyperlipidemia is an important risk factor for stroke [4,37]. Atrial fibrillation (AF) is a frequent cardiac rhythm disease associated with various significant negative health outcomes, such as heart failure and stroke. Particularly in women, AF is linked to an increased long-term risk of stroke, heart failure, and all-cause death [38,39]. Many investigations have also revealed that anemia is a frequently observed comorbidity in patients with AF and is associated with cardiovascular events, stroke, and gastrointestinal bleeding [40].
In medical practice, those experiencing moderate to severe anemia are more likely to receive medical attention than those with mild anemia. This means that patients with moderate to severe anemia and overt symptoms would be given blood transfusions, iron supplements, and vitamin B12, while mild anemia would more likely be overlooked [41,42,43]. Regarding the management of anemic patients, blood transfusions are often seen as an effective way to increase hemoglobin levels and improve overall health. In this study, we examined patients who received transfusions and pharmacological therapy within six months of the diagnosis index date. According to our results, greater proportions of patients with moderate and severe anemia received blood transfusions than of those with mild anemia ($24.14\%$ and $61.12\%$ vs. $11.22\%$, Table 1). Blood transfusions can help maintain the body’s hemodynamics and alter the viscosity of the blood. Maintaining circulatory balance and offering better care may be a strategy for preventing stroke. However, the decision to transfuse is influenced by a number of circumstances and by the judgment of healthcare professionals, and patients who receive frequent transfusions may also be exposed to an increased risk of stroke. Further research must be conducted to ascertain the beneficial effects of anemia therapies, such as transfusion and other medication, on reducing the risk of stroke.
Despite its strengths, our study has some limitations that should be noted. First, the different types of anemia, such as iron deficiency anemia or folate deficiency anemia, were not differentiated in this study. Second, we could not analyze data regarding lifestyle or socioeconomic status, such as smoking, alcohol habits, obesity, education, or financial condition. Third, to confirm the validity of the anemia diagnosis, we included only patients with at least one Hb measurement, which could introduce selection bias in a retrospective study; nevertheless, our hospital serves a population of approximately 2.8 million in central Taiwan, with more than 700,000 clinical visits each year. Finally, we did not retrieve clinical data on atherosclerosis, nutrition, pregnancy, or endogenous hormones, which might be predisposing factors for stroke, and the retrospective hospital data might still miss a few stroke patients who were diagnosed in other hospitals or died at home.
## 5. Conclusions
This study assessed the association between anemia and the risk of stroke. The prevalence of anemia was found to increase with age, and a high prevalence of anemia is expected to impose a major medical burden on countries becoming super-aged societies. In this study, the risk of stroke was found to be associated with age, regardless of sex. Our study reveals that moderate anemia should be considered a risk factor for stroke incidence, and monitoring anemia severity, together with other risk factors and biomarkers, is crucial in clinical practice.
# Fluid Intake and the Occurrence of Erosive Tooth Wear in a Group of Healthy and Disabled Children from the Małopolska Region (Poland)
## Abstract
Background: The aim of this study was to analyse the relationship between the type and amount of fluid intake and the incidence of erosive tooth wear in a group of healthy children and children with disabilities. Methods: This study was conducted among children aged 6–17 years, patients of the Dental Clinic in Kraków. The research included 86 children: 44 healthy children and 42 children with disabilities. The prevalence of erosive tooth wear was assessed by a dentist using the Basic Erosive Wear Examination (BEWE) index; the dentist also determined the prevalence of dry mouth using a mirror test. A qualitative-quantitative questionnaire on the frequency of consumption of specific liquids and foods related to the occurrence of erosive tooth wear, completed by the children’s parents, was used to assess dietary habits. Results: Erosive tooth wear was identified in $26\%$ of the total number of children studied, and these were mostly lesions of minor severity. The mean value of the sum of the BEWE index was significantly higher ($$p \leq 0.0003$$) in the group of children with disabilities. In contrast, the risk of erosive tooth wear was non-significantly higher in children with disabilities ($31.0\%$) than in healthy children ($20.5\%$). Dry mouth was significantly more frequently identified among children with disabilities ($57.1\%$). Erosive tooth wear was also significantly more common ($$p \leq 0.02$$) in children whose parents declared the presence of eating disorders. Children with disabilities consumed flavoured water or water with added syrup/juice and fruit teas with significantly higher frequency, while there were no differences in quantitative fluid intake between the groups. The frequency and quantity of drinking flavoured waters or water with added syrup/juice and sweetened carbonated and non-carbonated drinks were associated with the occurrence of erosive tooth wear for all children studied. Conclusions: The group of studied children presents inappropriate drinking behaviours regarding the frequency and amount of beverages consumed, which, especially in the group of children with disabilities, may contribute to the formation of erosive cavities.
## 1. Introduction
The disabled population is a group at an increased risk of oral diseases. The reasons for this phenomenon can be found not only in the existence of numerous barriers to access to dental care but also in difficulties in implementing proper dietary and hygienic habits in this group of people. People with disabilities have limited access to health services, including routine treatment, which leads to non-disability-related health inequalities [1].
Difficulties with swallowing, eating, salivating, chewing, and unsatisfactory overall oral aesthetics may be present among people with Down syndrome, the most common genetic cause of intellectual disability. A higher prevalence of periodontal lesions has been identified in this group of individuals, which may be caused by the patient’s self-injury to oral tissues. A higher incidence of dental caries was also observed in the group of people with disabilities due to different craniofacial anatomy, functional disorders, or parafunctions. Children with physical and intellectual disabilities constitute a group that needs early and regular dental care in order to prevent and limit the severity of the pathologies observed [2,3,4].
According to the 2014 Polish Population Health Survey, disabled persons, as defined by the Polish criterion, constituted $3.7\%$ of the population aged 0–14 years. The data showed that the proportion of children with disabilities was largest among 10–14-year-olds ($5\%$), followed by 5–9-year-olds ($4\%$), and was less than $3\%$ among the youngest children. More children with disabilities lived in urban areas than in rural ones (140,000 vs. 72,000, respectively) [5].
Dental erosion is the dissolution of dental hard tissues caused by acids of a non-bacterial origin. Erosive tooth wear is tooth wear with dental erosion as the primary etiological factor. As erosive tooth wear has serious long-term implications, it is important to establish its prevalence and its associated and aetiological factors [6]. The development of erosive tooth wear lesions may depend on internal factors, such as the state of health, the structure of the tooth, the structure and amount of saliva produced, as well as on external factors, mainly eating and drinking behaviour [7].
A systematic review showed that citrus fruits had a significant positive relationship with dental erosion. In addition, carbonated drinks and the consumption of acidic drinks at bedtime increased the risk of erosive tooth wear in adolescents. For sport/energy drinks and fruit juice, the results were inconclusive [8].
Dental erosion has been considered an oral manifestation of eating disorders (i.e., anorexia, bulimia) associated with vomiting practices. A meta-analysis showed that patients with eating disorders, or with risk behaviours for eating disorders, had a higher risk of erosive tooth wear [9].
The literature suggests [10,11] that pathological conditions characterised by reduced salivary flow, such as salivary gland inflammation or Sjögren’s syndrome, are factors that may influence the formation and development of dental erosion. The composition of saliva is particularly important in protecting against erosive processes, and normal salivary flow enables the dilution of acids of non-bacterial origin.
Dental erosion affected $42.3\%$ of the participants in the young adult Polish population and $24.3\%$ of the 15-year-old adolescent population [12,13]. To the best of the authors’ knowledge, the evaluation of factors influencing the development of erosive tooth wear among children with disabilities in Poland has not yet been conducted.
The aim of this study was to analyse the relationship between the type and amount of fluid intake and the incidence of erosive tooth wear in a group of healthy children and children with disabilities.
## 2.1. Study Design
This observational cross-sectional study was conducted between June and October 2019 among child patients of a private dental practice in Kraków contracted by the National Health Fund for orthodontic treatment. A total of 101 questionnaires were collected; after applying the exclusion criteria and verifying the completeness of the collected data, responses concerning 86 children were included in the evaluation: 44 healthy children and 42 children with disabilities, mainly Down syndrome ($73.8\%$), with single cases of the following chronic conditions: childhood cerebral palsy, retinoblastoma, deletion syndrome, vertebrae damage, psychomotor retardation, body asymmetry, and motor aphasia. The inclusion criteria for this study were age 6–17 years and not taking medication affecting saliva secretion (such as inhaled medication used for bronchial asthma). The exclusion criteria included lack of parental consent and lack of patient/child cooperation during the dental assessment.
All participants were informed about the conditions and procedure of this study and gave written consent to participate in the study. This study was conducted in accordance with the Declaration of Helsinki for medical research and received approval from the Bioethics Committee of Jagiellonian University (no. 1072.6120.138.2019 of 27 June 2019).
## 2.2. Data Collection
Parents/legal guardians of children were asked to answer a survey questionnaire related to dental treatment before their child entered the dental practice. No power calculation to estimate sample size was conducted for this study. Dental assessment of the occurrence and severity of erosive tooth wear and dry mouth was carried out by the orthodontics specialist Elżbieta Radwańska for the children with disabilities and by the dentist Barbara Noga for the healthy children; calibration between the two examiners was not performed.
During the oral examination, the presence and severity of erosive tooth wear in each child were assessed by noting the highest BEWE value for each sextant. On this basis, the child was categorised into a risk group based on the cumulative severity score, defined as 0–2—no risk (grade 1), 3–8—low risk (grade 2), 9–13—moderate risk (grade 3), and ≥14—high risk (grade 4) [14].
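As an illustration of this grading step only, a minimal Python sketch (the cumulative thresholds come from the text; the per-sextant score range of 0–3 is a standard feature of the BEWE index, and the function name is ours) could be:

```python
def bewe_risk(sextant_scores: list[int]) -> tuple[int, str]:
    """Sum the highest BEWE value (0-3) recorded in each of the six
    sextants and map the cumulative score to the risk grades used in
    the study: 0-2 none, 3-8 low, 9-13 moderate, >=14 high."""
    total = sum(sextant_scores)  # expects exactly six values
    if total <= 2:
        return total, "no risk (grade 1)"
    if total <= 8:
        return total, "low risk (grade 2)"
    if total <= 13:
        return total, "moderate risk (grade 3)"
    return total, "high risk (grade 4)"

print(bewe_risk([1, 0, 2, 1, 0, 1]))  # (5, 'low risk (grade 2)')
```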
A mirror test was also performed to assess the presence of dry mouth. This index is based on a 3-point scale with the following categories: I: no resistance (the mirror slides freely over the mucosa); II: slight resistance (slight resistance is felt when moving the mirror); III: significant resistance (the mirror sticks to the mucosa) [15].
To assess dietary behaviour, the authors used selected questions from the questionnaire on the frequency of consumption of specific products and liquids. These questions were modelled after the KomPAN Questionnaire developed by the Team of Behavioural Determinants of Nutrition, Committee on Human Nutrition Science, Polish Academy of Sciences (PAN) [16].
The questionnaire also included questions about selected socio-economic characteristics of the respondents, specific hygiene behaviours related to oral health maintenance, e.g., frequency of tooth brushing, and information about the general health of children, including subjective feelings of dry mouth. Parents/legal guardians were asked about the presence of medical conditions such as diabetes, asthma, Sjögren’s syndrome, xerostomia, inflammation of the salivary glands, and other conditions that increase the risk of erosive tooth wear, and whether the children were on continuous or regular medication (at least three times a week) and taking selected dietary supplements.
Parents were also asked to provide their child’s current height and weight, from which a body mass index (BMI, kg/m²) was calculated to assess the children’s nutritional status. The BMI values of each subject were related to national centile grids for age and sex, taking into account WHO criteria [17].
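For clarity, the index itself is simply weight over height squared; a one-line sketch (the example values are ours, and the centile-grid lookup the authors used is not reproduced here):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2 from parent-reported height and weight;
    the study then placed each value on national age- and sex-specific
    centile grids per WHO criteria (lookup not reproduced here)."""
    return weight_kg / height_m ** 2

print(round(bmi(40.0, 1.45), 1))  # 19.0
```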
## 2.3. Statistical Analysis
Statistical analyses were performed using Statistica 13.0 PL. Due to the nature of the collected data, differences in responses were evaluated with the χ2 test and with the Mann-Whitney U test, a non-parametric equivalent of Student’s t-test. In the description of the results, group A denotes healthy children, while group B denotes children with disabilities.
Differences in respondents’ answers were checked for the presence of disability, age groups, dryness of the mouth (no dryness—level 1, presence of dryness—levels 2 and 3 in the classification of the mirror test), and for the risk of dental erosion according to the adopted interpretation of the BEWE index (group 1—no risk, group 2—low, moderate and high risk). The level of statistical significance was set at $p \leq 0.05.$
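Purely to illustrate the two tests named above, an equivalent computation in Python with SciPy (the counts and BEWE sums below are hypothetical, chosen only to be consistent with the group sizes and percentages reported later) would be:

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Hypothetical 2x2 table: erosive tooth wear (yes/no) by group
# (A = healthy, n = 44; B = with disabilities, n = 42).
table = np.array([[9, 35],    # group A: with / without erosive wear
                  [13, 29]])  # group B
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Hypothetical per-child BEWE sums compared non-parametrically.
bewe_a = [0, 0, 1, 2, 4, 8]
bewe_b = [0, 2, 3, 3, 5, 7]
u_stat, p_mwu = mannwhitneyu(bewe_a, bewe_b)

print(f"chi-square p = {p_chi2:.3f}, Mann-Whitney p = {p_mwu:.3f}")
```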
## 3.1. Characteristics of Participants
The mean age of all children studied was 10.78 ± 2.96 years. There were no differences in the age of the respondents in the distinguished groups of healthy children and children with disabilities (Table 1). In the case of mothers of healthy children, only $18.2\%$ reported not working, while $64.3\%$ of mothers of children with disabilities did not work. In the study group of children with special needs, $35.7\%$ lived in rural areas, while in the group of healthy children, significantly fewer rural residents ($11.4\%$) were treated at a dental clinic ($$p \leq 0.0075$$).
None of the examined children had the following diseases associated with an increased risk of erosive tooth wear, i.e., diabetes, peptic ulcer disease, bronchial asthma, Sjögren’s syndrome, xerostomia, or inflammation of the salivary glands. On the other hand, in the group of children with disabilities, parents reported the occurrence of gastroesophageal reflux disease and eating disorders in children in single cases.
Significantly more parents of children with disabilities ($45\%$) than parents of healthy children ($9\%$) confirmed that their child was taking medication on a regular basis ($$p \leq 0.0002$$).
There was no statistically significant difference in the parents’ answers regarding dietary supplements taken by the child: $40\%$ of parents gave their children supplements containing vitamin C, while $7\%$ provided preparations containing iron. As many as $71\%$ of all surveyed parents reported that their child received other supplements.
## 3.2. Prevalence and Severity of Erosive Tooth Wear and Dry Mouth in Dental Assessment
According to the BEWE classification of non-carious erosive cavities, $26\%$ of the total number of children in this study had erosive tooth wear. A statistically significant difference was observed for the cumulative value of the BEWE index between the evaluated groups ($$p \leq 0.0003$$). In the group of healthy children, the mean value of the BEWE index was 1.39 (min = 0, max = 8, SD = 2.16), while the mean value of BEWE was higher for children with disabilities, amounting to 2.60 (min = 0, max = 7, SD = 1.98). There were no differences in the occurrence of erosive tooth wear depending on the sex, age, and BMI of the child.
The risk of erosive tooth wear was non-significantly more common in children with disabilities ($31.0\%$) than in healthy children ($20.5\%$). Erosive tooth wear was significantly more common ($$p \leq 0.02$$) in children whose parents declared the presence of eating disorders.
In both groups, as interpreted by BEWE, the lesions were of low severity, and therefore the risk of erosive tooth wear in the study group was low. The severity of the changes in the occurrence of erosive tooth wear in the study groups is shown in Figure 1.
For almost all healthy children ($97.7\%$), no resistance was found when the dental mirror was moved along the cheek surface, i.e., their mucosa was properly moistened. In contrast, for more than half of the children with disabilities ($57.1\%$), slight resistance was found during the examination (Figure 2); this difference was statistically significant ($p \leq 0.0001$). The survey questionnaire also asked about the subjective feeling of dryness in the mouth; here, there was no statistically significant difference between the healthy and disabled children, and only $12\%$ of parents of all the examined children reported dry mouth.
## 3.3. Oral Hygiene Behaviour in the Study Group of Children
In maintaining oral hygiene in the group of children with disabilities, parents used an electric toothbrush significantly more often, while healthy children used dental floss ($$p \leq 0.0032$$) and chewed sugarless gum after meals ($$p \leq 0.0373$$) significantly more often compared to children with disabilities (Table 2).
There were also significant differences regarding the frequency of children’s visits to the dental practice: $61.4\%$ of healthy children visited the dental practice every six months, while $42.9\%$ of children with disabilities visited more often than every six months for dental check-ups ($$p \leq 0.0057$$).
## 3.4. Qualitative and Quantitative Fluid Intake in a Group of Healthy and Disabled Children
The frequency of consumption of specific beverages in the groups of healthy and disabled children is shown in Table 3, and the results indicate a significantly higher frequency of consumption of flavoured waters or waters with juice syrup and fruit tea in the group of children with disabilities compared to the control group. No differences were observed in the quantitative consumption of specific liquids and total fluid intake (TFI) in the study groups of children (Table 4).
## 3.5. Fluid Intake in a Study Group of Children and the Incidence of Erosive Tooth Wear
It was confirmed that erosive tooth wear changes were significantly more frequent in children consuming more sweetened carbonated and non-carbonated drinks and black tea, as well as drinking more liquids per day (Table 5). Also close to the accepted limit of statistical significance were flavoured waters or waters with added syrup/juice.
## 4. Discussion
In this study, the incidence of erosive tooth wear was $26\%$ among the total number of children, and the lesions were mostly of minor severity. The exclusion criteria for the study group comprised the use of inhaled bronchial asthma medications. The cumulative assessment of the BEWE index showed a significant difference between the groups of children according to the presence of a disability, while in the interpretation of the BEWE, the risk of erosive tooth wear was non-significantly more frequent for children with disabilities (mainly with Down syndrome) than for the healthy ones.
Similar results were obtained in a study conducted in Dubai in 2019, although there dental erosion was significantly more prevalent in children with Down syndrome than in healthy children ($34\%$ vs. $15.3\%$) [18].
Among the group of children with disabilities, as many as $57.1\%$ showed slight resistance when moving a mirror in the mouth. This may be related to the amount of fluid consumed and the effect of medication, which, however, was not investigated in this study. A dry mouth can be one of the symptoms of dehydration. In the study group, besides the effect of frequency, the amount of fluid intake was also evaluated. Different studies show that children and adolescents in Europe do not drink enough water [19,20]. Decreased salivary flow causes a decrease in clearance rate, leading to an increase in the risk of erosive tooth wear, especially in the case of physical activity [21].
In the survey, we confirmed that erosive tooth wear was significantly more frequent in children consuming more sweetened carbonated and non-carbonated drinks and black tea, as well as drinking more fluid per day. Flavoured waters or waters with added syrup/juice were also close to the accepted limit of statistical significance; the amount of their consumption may influence the development of erosive tooth wear, which is related to the low pH of these drinks (pH < 4.5). A higher frequency of consumption of flavoured waters or waters with juice syrup and fruit tea was observed in the group of children with disabilities.
The findings of the present study are in accordance with the results of a systematic review, where carbonated drinks were significantly positively associated with dental erosion in adolescents [8].
Also, a positive correlation was observed between the erosive lesions of the anterior teeth and the frequency of consumption of carbonated and energy drinks in the population of adolescents aged 15 in Poland [12].
However, in the population of 18-year-old young adults in Poland, drinking behaviour, like frequent consumption of fruit teas and energizing beverages, was connected with dental erosion. Also, hygienic habits, medical conditions such as asthma, eating disorders, and oesophageal reflux showed statistical significance associated with erosive tooth wear [13].
Children and adolescents from Poland make mistakes regarding the frequency of beverage consumption. The vast majority of schoolchildren from Kraków and the surrounding area report consuming water about three times a day, but more than a third of them choose flavoured water or water with added juices/syrups [22].
A national study by Jessa J. and Hozyasz K. [23] indicated that children aged 6 months to 18 years hospitalised in Warsaw in 2016 at the Department of Paediatrics of the Mother and Child Institute were significantly more likely to drink flavoured waters.
In contrast, in a group of adolescents from the region of Podkarpacie (Poland), sugar beverages (soft drinks) were consumed most frequently, and respondents chose energy drinks more often than isotonic beverages. All the beverages indicated have an adverse effect on the development of erosive tooth wear [24].
Similarly to the presented study of a group of children from the Małopolska region, a study by Alves et al. showed an association of dental erosion with the consumption of soft drinks (including sweet carbonated and non-carbonated drinks) and also fruit juices [25].
In a cross-sectional study on a sample of 400 children from Valencia (Spain), a positive correlation was observed between the presence of tooth erosion and frequent consumption of fruit juices, fizzy drinks, and isotonic drinks ($p \leq 0.05$), showing a higher correlation if the liquid was held in the mouth before swallowing [26].
A study among adolescents in Stockholm County [27] found that erosive lesions were significantly correlated with soft drink consumption, the use of juice or sports drinks as a thirst quencher after exercise, and tooth hypersensitivity when eating and drinking.
The presented studies lack uniform nomenclature of individual types of beverages, and they do not specify the type of fruit juice, which makes it difficult to compare the results.
It is recognized that beverages with high calcium content, like milk or calcium-enriched juices, may reduce the risk of dental erosion. Therefore, adequate consumption of milk and dairy products is important in the prevention of dental erosion [8].
In the study by Guelinckx et al. [28], data from 3611 children and 8109 adolescents were retrieved from 13 countries, including Poland. In the total sample, the highest mean intakes were observed for water (738 ± 567 mL/day), followed by milk (212 ± 209 mL/day), regular soft beverages (RSB) (168 ± 290 mL/day), and juices (128 ± 228 mL/day). Large contributions of hot beverages, like black or fruit tea, to total fluid intake (TFI) were reported in the total children sample of Poland, which is culturally conditioned.
In the study group of children from the Małopolska region, the amount of milk consumption was similar (median 200 mL/day).
In a study by Hasselkvist et al., the development of erosive tooth wear was associated with lower intake of sour milk and more frequent intake of drinks between meals [29].
Besides the consumption of acidic drinks, lifestyle factors that may be conducive to such consumption, such as sedentary living, excessive screen viewing, and being overweight, may contribute to the development of erosive wear [8]. Numerous studies have shown a positive correlation between the frequency and quantity of sugar-sweetened beverage consumption and body mass index in children and adolescents [30].
In this study of children from the Małopolska region, there was no statistically significant difference in the occurrence of erosive tooth wear in relation to the sex, age, and BMI of the child. However, the declared amount of acidic liquid consumption was associated with the occurrence of erosive tooth wear.
In a study by Tschammler et al., a total of 223 children aged 4–17 years were examined; children with obesity and extreme obesity had significantly higher erosive wear and caries of deciduous and permanent teeth compared to children with normal weight [31].
People with intellectual disability (ID) are characterised by a high prevalence of incorrect eating patterns, as well as a high risk of becoming overweight or obese. The results of a study from Poland showed that excess body weight was observed in $66.7\%$ and obesity in $38.9\%$ (seven subjects) of the respondents with ID [32].
Due to the lack of Polish data regarding the quality of fluid consumption of children and adolescents with ID, it is impossible to compare the results of this study with the findings of other Polish authors.
Oral hygiene is also an important protective factor in the prevention of erosive cavities. Children with disabilities have difficulty maintaining proper oral hygiene. In the surveyed group of children with disabilities, parents significantly more often used an electric toothbrush, while healthy children significantly more often used dental floss and chewed sugar-free gum after meals. The surveyed parents of children with disabilities are aware of the importance of oral health for their child, as reflected in the declared high frequency of dental clinic visits. Almost half of the parents/guardians of the studied children had a university degree.
Dental treatment of disabled people in Poland is provided free of charge. In addition to the services guaranteed in the Polish healthcare system, disabled people have access to treatment with the best materials and treatment methods, and they are reimbursed by the state for treatment under general anesthesia [33].
The data obtained from parents/guardians of disabled and/or chronically ill children living in Poznań and Białystok (Poland) showed that up to $18.5\%$ of children with disabilities had never been to a dentist. The most common reasons for a dental visit were changes within a tooth noticed by a parent ($25.5\%$) or a dental check-up ($25\%$). Only $67.5\%$ of respondents reported no access barriers to dental treatment [34].
## Strengths and Limitations of this Study
In the presented study, interpretation of the results is limited by the specific manner of material collection and the small size of the study group. Other limitations are the lack of a power calculation to estimate the sample size and the lack of results regarding the prevalence of caries and oral hygiene. Among the strengths of this study, it should be emphasised that few studies provide a quantitative assessment of the fluids drunk by a group of children with special needs.
## 5. Conclusions
The group of children studied presents inappropriate drinking behaviours regarding the quality of the beverages consumed, which, especially in the group of children with disabilities, may contribute to the formation of erosive cavities. Disabled children cannot perform hygiene procedures or make decisions about their eating habits on their own. Therefore, parental education on the relationship of food and fluid intake to oral hygiene and future general health should be increased.
# Social Support: The Effect on Nocturnal Blood Pressure Dipping
## Abstract
Social support has long been associated with cardiovascular disease risk assessed with blood pressure (BP). BP exhibits a circadian rhythm in which BP should dip between 10 and $15\%$ overnight. Blunted nocturnal dipping (non-dipping) is a predictor of cardiovascular morbidity and mortality independent of clinical BP and is a better predictor of cardiovascular disease risk than either daytime or nighttime BP. However, it is often examined in hypertensive individuals and less often in normotensive individuals. Those under age 50 are at increased risk for having lower social support. This study examined social support and nocturnal dipping in normotensive individuals under age 50 using ambulatory blood pressure monitoring (ABP). ABP was collected in 179 participants throughout a 24-h period. Participants completed the Interpersonal Support Evaluation List, which assesses perceived levels of social support in one’s network. Participants with low levels of social support demonstrated blunted dipping. This effect was moderated by sex, with women showing greater benefit from their social support. These findings demonstrate the impact social support can have on cardiovascular health, exhibited through blunted dipping, and are particularly important as the study was conducted in normotensive individuals who are less likely to have high levels of social support.
## 1. Introduction
Social support can be defined as the perceived availability of resources [1] (functional support). Research has shown a strong association between social support and mortality and morbidity [2,3,4,5], including cardiometabolic diseases such as cardiovascular disease (CVD), type 1 and type 2 diabetes, and chronic obstructive pulmonary disease, which are among the global leading causes of death [6,7,8,9,10]. Lower levels of social support have been associated with higher incidence and progression of colorectal cancer in men, higher recurrence of breast cancer in women, worse outcomes in older adults with lung cancer, worse outcomes for single individuals with gastric cancer [11,12,13,14], and worse diabetes outcomes [15]. Social support has also been linked to psychological factors such as depression, distress, and satisfaction with life, which influence cardiovascular health [16,17]. A review by Holt-Lunstad, Smith & Layton [18] found that the link between supportive relationships and health was as predictive of disease as known risk factors such as smoking and lack of physical exercise. Additionally, individuals with poor social connectedness are $29\%$ more likely to develop CVD and $32\%$ more at risk for stroke [19].
Hypertension is the most common disease in industrialized nations [20] and is the predominant risk factor for CVD [21]. In 2020, more than 670,000 deaths in the U.S. had hypertension as a primary or contributing cause [20], and nearly half of adults in the U.S. ($47\%$) have hypertension, as defined by the American Heart Association [22]. In adults who have not been diagnosed with CVD, slightly elevated levels of both systolic blood pressure (SBP) and diastolic blood pressure (DBP) are strongly associated with an increased risk of developing hypertension in a relatively short time [23], and such elevations are associated with early target organ damage [24,25,26]. This is important, as blood pressure (BP) shows an increasing trajectory over time and is associated with poor cardiovascular outcomes even 25 years later [27]. Accordingly, several meta-analyses have shown the effectiveness of lowering BP in reducing CVD risk [28,29].
BP shows a circadian rhythm, such that a healthy cardiovascular profile includes a decrease of 10–$20\%$ from day to night (i.e., nocturnal dipping) [30]. Blunted nocturnal dipping (non-dipping) is defined as blood pressure that does not dip by at least $10\%$. It is associated with an increased risk of cardiovascular events in both normotensive and hypertensive adults, a higher risk of cardiovascular morbidity and mortality [31,32,33], a composite kidney endpoint, and an increased risk of all-cause mortality [34]. Recent research has shown blunted nocturnal dipping to be a better predictor of cardiovascular disease and mortality than 24-h averages alone [35,36]. Indeed, abnormalities of circadian dipping patterns are associated with both total and cardiovascular mortality [36].
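For reference, the dipping percentage invoked in these definitions is conventionally computed from the mean daytime and nighttime pressures:

$$\text{dip}(\%) = \frac{\overline{\text{BP}}_{\text{day}} - \overline{\text{BP}}_{\text{night}}}{\overline{\text{BP}}_{\text{day}}} \times 100$$

so a value below $10\%$ corresponds to non-dipping under the definition above.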
Ambulatory blood pressure (ABP) has been demonstrated to be a better predictor of mortality than blood pressure taken in an office setting, typically a physician’s office (clinical BP) [37,38,39]. ABP allows for a more accurate assessment of BP, as it takes multiple readings spread across the day and night and can thus capture BP fluctuations, rather than relying on a single reading in a physician’s office, which may be influenced by the white-coat effect [38]. However, although hypertension is the primary predictor of CVD and social support has been consistently linked to BP assessed with ABP [40,41,42,43], a recent meta-analysis by Uchino and colleagues [44] found no association between daytime ABP and social support. This is an interesting finding given the amount of literature detailing the association between the two. For nocturnal dipping research, these findings are important to take into consideration, as nocturnal dipping is calculated from daytime and nighttime ABP. It is worth considering that social support may impact ABP differently when the ratio of daytime to nighttime ABP is used rather than daytime ABP alone. Uchino and colleagues suggested that an examination of other indices of ABP, such as nocturnal dipping, would be beneficial, as such studies were not included in the meta-analysis. This could be informative given that nocturnal dipping as assessed with ABP has been linked to social support, such that those with lower levels of social support/social integration showed blunted dipping [45,46,47].
However, much of the research on nocturnal dipping has focused more heavily on hypertensive individuals and older adults, or it has not differentiated between younger and older adults. Yet adults under the age of 50 tend to report a lack of social connections or more loneliness than those over the age of 50 [48,49]. In fact, younger generations (Gen Z and Millennials) report a lack of social support and fewer social interactions than baby boomers [49]. Further, the literature on dipping has varied: some studies show social support affecting dipping for both SBP and DBP, while others show effects for only one or the other, or no significant effects on either. A 2013 meta-analysis by Fortmann and Gallo [50] showed that of the studies that used the Interpersonal Support Evaluation List (ISEL) to assess social support, only one study found social support associated with both SBP and DBP dipping [51], one associated social support with only SBP dipping [52], one found no results for either SBP or DBP dipping [53], and one [54] found a marginally significant association between social support and SBP dipping. Thus, one aim of our study was to examine these discrepancies between findings.
Social support can be broadly assessed by the ISEL, measuring perceptions of functional support including tangible, emotional, and informational support, and feelings of belonging. However, none of the studies noted above from Fortmann and Gallo’s meta-analysis examined the specific domains of the ISEL. Because different domains of social support can be more beneficial, depending on one’s needs, these specific types should be examined individually. Additionally, a significant portion of the literature has focused on hypertensive individuals, yet research shows that an increased blood pressure trajectory over time is associated with poor health outcomes even 25 years later [27]. Thus, it would be beneficial to understand the impact of social support on nocturnal dipping in a normotensive sample under 50 years of age, a point in life where individuals could make social changes that could decrease the risk of developing hypertension later in life.
In an effort to better understand this impact, we collected ABP from 179 normotensive individuals under 50 years of age over a 24-h period, along with data on their social support. Because social support has been associated with stress, and stress can influence BP, we also examined the association of stress with dipping. Additionally, based on recent work showing the prognostic value of nocturnal dipping for predicting cardiovascular disease over that of 24-h blood pressure readings [35,36], and on the recent work on the association between social support and daytime BP [44], we examined the impact of social support on daytime, nighttime, and 24-h ABP. Finally, we examined the effect of social support on nocturnal dipping using the full ISEL measure. Because social support has different facets, as measured by the ISEL, we also examined its specific dimensions. We expected this association to be moderated by sex, based on the literature identifying sex as an independent predictor of daytime and nocturnal BP and of nocturnal BP dipping [55].
## 2.1. Criteria
Participants were excluded if they had medical conditions or took medications with a cardiovascular component (e.g., hypertension, or psychological problems for which they were being medically treated; see Cacioppo, Malarkey [56]). Participants were required to have a self-reported body mass index (BMI) no higher than 29.9, as 30 or higher is classified as obese, and hypertension and obesity are highly correlated. Participants were also required to have a smartphone in order to complete a diary entry (see Measures below) at each BP reading. Each participant was given a personalized access code to the diary website.
## 2.2. Participants
In total, 179 participants (male, $$n = 91$$, $50.8\%$; female, $$n = 88$$, $49.2\%$) were recruited through a university, social media, and the community. All participants were between 21 and 50 years of age, married, and currently living with their spouse. The mean age of participants was 24.85 years (SD = 4.10, range 21–46), and the average length of their marriage was 2.99 years (SD = 2.04; range 1–18). Most were White ($91.53\%$) and college educated ($46.89\%$ with a college degree or higher; $51.98\%$ currently pursuing a college degree), with $46.88\%$ reporting an income over USD 30,000 (see Table 1).
## 2.3. Procedure
Following informed consent, eligible participants completed questionnaires related to perceptions of social support. Participants were then fitted with an ABP monitor and given detailed instructions on its use, including how to stop a reading if needed (e.g., while driving or in a work meeting) and how to stop all readings if they chose to end the study early. Monitors were set to take a reading at random twice an hour throughout the day and once per hour overnight. Participants were also given instructions on completing the diary entry (see Measures section below) and instructed to complete the entry within three to five minutes after the ABP monitor took a reading; diary entries were not required overnight. Participants returned the equipment the following day and were compensated USD 75 each in cash.
## 2.4.1. Physiological Measures
Ambulatory blood pressure was obtained using the Oscar 2 (Suntech Medical Instruments, Raleigh, NC, USA). The Oscar 2 was designed specifically for ABP assessment and has been validated for both SBP and DBP by international guidelines [57]. It utilizes codes that may signify problems with the estimation of ABP readings. Based on prior research [58], readings associated with weak Korotkoff sounds, measurement timeout, and air leaks were deleted. Outliers associated with artifactual readings identified using criteria by Marler, Jacob [59] were also discarded; these included: (a) SBP <70 mmHg or >250 mmHg, (b) DBP <45 mmHg or >150 mmHg, and (c) SBP/DBP < [1.065+ (0.00125 × DBP)] or >3.0.
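As an illustration of this cleaning rule only, a minimal Python sketch of the quoted Marler, Jacob [59] outlier criteria (the reading format is hypothetical) might be:

```python
def is_artifact(sbp: float, dbp: float) -> bool:
    """Flag an ABP reading as artifactual per the criteria quoted in
    the text: implausible SBP or DBP, or an SBP/DBP ratio outside
    [1.065 + 0.00125 * DBP, 3.0]."""
    if sbp < 70 or sbp > 250:
        return True
    if dbp < 45 or dbp > 150:
        return True
    ratio = sbp / dbp
    return ratio < 1.065 + 0.00125 * dbp or ratio > 3.0

readings = [(118, 76), (260, 80), (95, 88)]
clean = [r for r in readings if not is_artifact(*r)]  # keeps only (118, 76)
```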
## 2.4.2. Psychological and Relationship Measures
The Perceived Stress Scale (PSS). The PSS is a ten-item assessment to measure stress perceptions and predict health-related outcomes associated with stress appraisal. This widely used assessment has been shown to have adequate psychometric properties and is related to other stress, health, and satisfaction measures [60]. Good reliability was demonstrated for the current study at 0.86.
Diary. Each participant completed a diary entry on their smartphone for each BP reading during the day. Piloting showed an entry took less than 2 min to complete. The diary collected information on standard control BP variables, and participants were instructed to complete the diary within 5 min following the BP reading. A time/day stamp allowed us to verify the diary entry was completed on time. Readings which were not completed within the 5 min window were discarded.
Sleep Quality. Sleep was assessed using a single item measure the following morning in which participants rated their sleep the previous night compared to an ordinary night on a 1–7 scale (1 = extremely bad; 7 = extremely good).
Interpersonal Support Evaluation List (ISEL). The ISEL [61] assesses network-level functional social support, measuring specific domains of appraisal, self-esteem, belonging, and tangible support. The ISEL has shown an overall internal consistency of 0.83. Our study demonstrated good reliability at 0.78.
## 2.5. Statistical Methodology
Data were analyzed using SAS version 9.4. Descriptive statistics were computed to examine demographics and baseline SBP and DBP, in addition to average daily SBP and DBP, sleeping SBP, and ISEL and PSS averages. Mixed-model (PROC MIXED) regressions were used to analyze associations between social support and nocturnal blood pressure dipping. Three steps were followed in running the regression models. The first step was to determine which covariates were significant predictors of the dependent variable (nocturnal dipping) using forward selection. The second step was to run the regression models: one model used PSS as the predictor, controlling for the significant covariates from step one (age, BMI, posture, consumption of foods or drinks, and activity since the prior reading); the second model used ISEL as the predictor, controlling for the same covariates. The last step was to conduct multiple-group analysis to investigate sex differences in the models. Statistical significance was set at $p \leq 0.05.$
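The models were fit in SAS (PROC MIXED). As a rough cross-language sketch only, an analogous random-intercept model in Python with statsmodels (variable names hypothetical; the authors' dichotomous dipping outcome would strictly call for a generalized mixed model, so this linear version is purely illustrative) might look like:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: repeated readings nested within participants.
df = pd.read_csv("abp_long.csv")  # columns: id, dip, isel, age, bmi, posture, ...

# Linear mixed model with a random intercept per participant and fixed
# effects for social support (ISEL) and the step-one covariates.
model = smf.mixedlm("dip ~ isel + age + bmi + posture", data=df,
                    groups=df["id"])
result = model.fit()
print(result.summary())
```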
Nocturnal dipping was calculated as the change from daytime to nighttime BP. Some researchers have measured it using the night–day ratio, with dipping (>0.8 and <0.9), extreme dipping (≤0.8), non-dipping (>0.9 and ≤1.0), and reverse dipping (>1.0). Using these criteria, $16.77\%$ of our sample would be classified as extreme dippers, $48.04\%$ as dippers, $24.58\%$ as non-dippers, and $10.61\%$ as reverse dippers. The distribution was thus concentrated among dippers and non-dippers, and the extremes in either direction were roughly equivalent. We therefore treated nocturnal dipping dichotomously (classified according to the night/day BP ratio: dippers ≤0.90, non-dippers >0.90), taking the average of the daytime BP readings and the average of the nighttime BP readings (from self-reported bedtime to self-reported rising).
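A minimal sketch of this dichotomous classification (the threshold is as stated above; the example inputs reuse the sample's average daily and sleeping SBP reported in Section 3.1):

```python
def dipping_status(day_mean: float, night_mean: float) -> str:
    """Dichotomous rule used in the study: a night/day BP ratio <= 0.90
    is a dipper, > 0.90 a non-dipper; the finer four-level scheme uses
    the 0.8, 0.9, and 1.0 cut points."""
    ratio = night_mean / day_mean
    return "dipper" if ratio <= 0.90 else "non-dipper"

print(dipping_status(135.4, 119.67))  # ratio ~= 0.88 -> 'dipper'
```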
## 3.1. Preliminary Analysis
The mean number of readings per participant was 36.98 (range 22–46) for the 24 h period. All outliers due to artifactual readings were discarded as noted above. Percentage of discarded readings per participant was $1.48\%$ ($M = 0.55$, range 1–6). The SBP baseline average was 122 (SD = 12.19), and DBP baseline was 71.9 (SD = 7.84). Daily SBP average was 135.4 (SD = 18.83), and daily DBP average was 77 (SD = 9.92). Sleeping SBP average was 119.67 (SD = 20.82), and sleeping DBP average was 60.47 (SD = 10.26). ISEL scores ranged from 15–45, with a mean score of 36.39 (SD = 5.59). The PSS average was 16.5 (range 1–33; SD = 6.74), and sleep quality average was 4.06 (range 1–7; SD = 1.29) (Table 2).
We found social support associated with stress, such that those with greater perceived social support demonstrated less stress (B = −0.69, SE = 0.01, t[8141] = −62.55, $p \leq 0.001$). We next examined whether stress was associated with SBP or DBP dipping. Stress was associated with both SBP and DBP dipping and was thus included in the model.
We then examined daytime blood pressure readings and social support. Consistent with the Uchino findings, neither daytime SBP (B = −0.08, SE = 0.08, t[651] = −0.89, $$p \leq 0.37$$) nor daytime DBP ($B = 0.01$, SE = 0.05, t[849] = 0.14, $$p \leq 0.37$$) was associated with social support. We then looked at nighttime readings and social support. Nighttime SBP ($B = 0.36$, SE = 0.18, t[183] = 1.98, $$p \leq 0.04$$) was associated with social support, but nighttime DBP was not ($B = 0.13$, SE = 0.10, t[223] = 1.34, $$p \leq 0.18$$). Neither 24-h SBP (B = −0.07, SE = 0.08, t[677] = −0.89, $$p \leq 0.37$$) nor 24-h DBP ($B = 0.00$, SE = 0.04, t[865] = 0.09, $$p \leq 0.92$$) was associated with social support.
## 3.2. Primary Analysis
Age, BMI, sex, position at the time of reading, caffeine consumption, activity level, and sleep quality were significant predictors and were added to the model. We ran our first analysis on the full ISEL measure, capturing all domains in one score. As expected, nocturnal dipping was associated with perceptions of social support, such that those reporting low levels of social support showed blunted DBP dipping (B = −0.41, SE = 0.06, t[3801] = −7.24, $p \leq 0.001$). SBP dipping was not associated with social support ($$p \leq 0.118$$). We then looked at each domain separately to parse out the effect. Self-esteem support was associated with both SBP dipping and DBP dipping (B = −0.64, SE = 0.09, t[3824] = −6.59, $p \leq 0.001$; B = −0.79, SE = 0.14, t[3801] = −5.74, $p \leq 0.001$), such that lower self-esteem support was associated with blunted dipping. Tangible support was associated with both SBP dipping (B = −0.39, SE = 0.06, t[3824] = −6.20, $p \leq 0.001$) and DBP dipping (B = −1.75, SE = 0.19, t[3801] = −9.31, $p \leq 0.001$), such that lower tangible support was associated with blunted dipping. Belonging support was associated with SBP dipping ($B = 0.36$, SE = 0.05, t[3824] = 6.68, $p \leq 0.001$) and DBP dipping ($B = 0.38$, SE = 0.11, t[3809] = 3.27, $$p \leq 0.001$$), such that less belonging support was associated with blunted dipping. Appraisal support was associated with blunted DBP dipping (B = −1.68, SE = 0.19, t[3809] = −8.97, $p \leq 0.001$) but not with SBP dipping (B = −0.07, SE = 0.05, t[3824] = −1.4, $$p \leq 0.16$$) (Table 3).
## 3.3. Effect of Gender
We examined whether the effects of social support on nocturnal dipping varied by sex. We found a significant interaction with sex, such that women benefited more from total social support for SBP dipping (B = −0.298, SE = 0.034, t[3823] = −8.68, $p \leq 0.001$) but not for DBP dipping ($B = 0.004$, SE = 0.05, t[3800], $$p \leq 0.94$$). Looking at specific domains, women benefited more than men from tangible support for SBP (B = −0.55, SE = 0.097, t[3823] = −5.64, $p \leq 0.001$), but neither sex benefited for DBP (B = −0.34, SE = 0.18, t[3800] = −1.86, $$p \leq 0.06$$). Women benefited more from belonging support for SBP (B = −0.64, SE = 0.12, t[3823] = −5.16, $p \leq 0.001$), while men benefited more for DBP ($B = 0.88$, SE = 0.13, t[3813] = 6.59, $p \leq 0.001$). Neither sex benefited from self-esteem support. Women benefited more than men from appraisal support for SBP (B = −1.154, SE = 0.09, t[3823] = −12.50, $p \leq 0.001$), but neither sex benefited for DBP (B = −0.07, SE = 0.05, t[3808] = −1.4, $$p \leq 0.161$$) (Table 4).
## 4. Discussion
Our main findings show social support associated with dipping, such that individuals who perceive they have less social support demonstrate blunted nocturnal dipping for DBP but not for SBP. When we examined support by specific domains, we found both SBP dipping and DBP dipping associated with social support within the specific domains of tangible and self-esteem support. Further, we extended the prior literature showing the association between social support and health by examining normotensive individuals under 50 years of age. It is important to note that while overall social support was not associated with blunted SBP dipping, overall social support was associated with DBP dipping, and DBP carries its own risks separate from SBP. Whereas high SBP readings indicate an increased risk of heart disease, including heart attacks, heart failure, kidney disease, and overall mortality, high DBP is linked to a higher risk of abdominal aortic aneurysm. The American Heart Association notes that there is an emphasis on SBP, yet research has shown that each increase of 10 mmHg in DBP is associated with a $28\%$ increase in the risk of developing an abdominal aortic aneurysm [62]. It is therefore important to take both SBP and DBP into account when assessing risk.
While we found DBP dipping to be associated with social support, we also found the same results as Uchino and colleagues on daily blood pressure and social support, such that daytime SBP and DBP were not associated with social support. We also found that nighttime DBP was not associated with social support, nor did we find an association for either 24 h SBP or DBP, although nighttime SBP was associated with social support. This is interesting, as both daytime and nighttime ABP are used to determine nocturnal dipping. This seems to suggest that it is the combination of daytime and nighttime blood pressure, specifically using the ratio of daytime and nighttime blood pressure, that is more useful as a measure of cardiovascular disease risk than using daytime or nighttime measures alone. The lack of association between 24 h SBP or DBP and social support is also indicative of the benefits of using nocturnal dipping as a health outcome, rather than BP alone, whether 24 h, daytime, or nighttime.
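Since dipping is defined by the contrast between daytime and nighttime readings rather than by either period alone, a minimal sketch (Python) of the conventional dipping computation may help orient the reader. The readings and the 10% non-dipper threshold below are illustrative of the standard ABP convention, not values taken from this study.

```python
import numpy as np

def dipping_percent(day_bp: np.ndarray, night_bp: np.ndarray) -> float:
    """Nocturnal dipping as the percent decline from daytime to nighttime mean BP."""
    return (1.0 - night_bp.mean() / day_bp.mean()) * 100.0

# Hypothetical daytime and nighttime SBP readings (mmHg) from one 24 h ABP recording
day_sbp = np.array([118, 122, 125, 120, 119, 124])
night_sbp = np.array([105, 108, 110, 107])

dip = dipping_percent(day_sbp, night_sbp)
print(f"SBP dipping: {dip:.1f}%")  # ~11.4% for these values
print("blunted (non-dipper)" if dip < 10 else "normal dipper")
```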
Social support is multidimensional and can influence health through various pathways. We gained a better picture of the contributions of social support to nocturnal dipping when we divided social support into its component parts. Tangible support predicted both SBP and DBP dipping, which is perhaps to be expected, as tangible support is one of the forms of support individuals find most beneficial. It can include provisions of shelter, food, or financial help, and the perception of the availability of such assistance when needed can significantly reduce stress. Financial stress can be a particular source of distress and is related to poor psychological and physiological health for those with low levels of perceived tangible support, with a six- to seven-fold increased odds ratio for poor psychological well-being and psychosomatic symptoms [63]. While there is a large body of literature on the benefits of tangible support for health outcomes, there is little addressing tangible support and nocturnal dipping. Our findings address this gap and demonstrate that in addition to contributing to other health-related outcomes, tangible support impacts nocturnal dipping.
SBP dipping and DBP dipping were also associated with self-esteem social support. Self-esteem social support assessments include items such as “Most people I know think highly of me.” Such support can be beneficial in terms of feeling valued by others. Self-esteem support can increase one’s ability to ask for help when help is needed, as it may decrease feelings of burdensomeness. Thus, being able to ask for needed support could manifest as healthy nocturnal dipping, as shown in this study.
These findings indicate the importance of examining social support within its differing domains, as one kind of support may be more effective at reducing stress than another. This is not to say that only these two aspects of support (tangible and self-esteem) are beneficial and the other aspects are not. Rather, our findings indicate that normotensive individuals under the age of 50 may benefit more from these specific types of support than older normotensive or hypertensive individuals. This is important to our understanding of the link between social support and health, as the benefits of social support have been shown to be more effective if they are applicable to the needs of the individual. In other words, one who needs emotional support in a time of stress will not find informational support helpful. Support is most effective if tailored to the specific needs of the individual.
It is to be expected that women would benefit more from social support. Traditionally, women offer more social support to others and, when facing stress, tend to give and receive more social support than men. Part of this phenomenon, as noted in the literature, may be because men are more likely to react to stress with a “fight or flight” orientation, while women tend to use the “tend-and-befriend” response, with ‘befriend’ referring to creating and maintaining social networks. Men have more diverse social ties, but also more negative interactions. Women’s networks tend to be more homophilous, homogeneous, and family-based than men’s, which may make it easier for a woman to ask for help from someone similar to her. Further, women are more likely to offer support to their same-sex friends than men are, and men are less likely than women to seek or provide support. These differences in social network structure and function may give women a larger network from which to draw social support, make them more likely to ask for help, and thus more likely to benefit from their social support.
Women’s SBP dipping and DBP dipping were also associated with belonging and appraisal social support. Belonging assessments include items such as “If I decide one afternoon to go to a movie, I could easily find someone to go with me.” Such belonging support can be beneficial in terms of enhancing mood and feeling a sense of acceptance and belonging by others. This sense of acceptance and belonging can help individuals cope with stressors more effectively and could help one avoid certain stressors to begin with. Appraisal support is measured with items such as “There is someone I can turn to for advice about handling problems with my family.” This type of advice or guidance can be a particularly effective type of support for women, as demonstrated in the healthier dipping profile seen in our participants.
The current study is a novel contribution to the literature, as we have examined social support and health in a relatively under-investigated health outcome, which is a known contributor to cardiovascular disease. Further, we examined those at greater risk of loneliness and lack of social support and have done so by looking at the overall ISEL measure and the individual domains of social support. Lastly, we have specifically focused on a normotensive sample, which is likely more representative of those in the population of individuals under 50 years of age.
## Limitations and Future Work
While these findings are important, certain limitations apply. Our sample was predominately White and educated, and all were heterosexual and married; thus, it is not clear how these findings would relate to unmarried individuals, people of color, or non-heterosexual individuals. High BP is also more common in non-Hispanic Black adults than in non-Hispanic White adults and other ethnic groups. It would therefore be important to look at nocturnal BP in a more diverse sample. We used BMI as a criterion for exclusion based on the large amount of prior research that has used this measure. BMI is also a quick way to determine qualification before participants are scheduled to come to the lab. However, the body adiposity index (or hip-to-waist ratio) is now generally the preferred method and may have more accurately assessed obesity. We did not use the body adiposity index, as it could not be assessed until the participant arrived at the lab, and decisions on eligibility needed to be made earlier in the screening process. Additionally, we did not assess whether participants had taken a nap during the day, which could impact sleep time/quality. Our study was also cross-sectional; thus, the social support needs of the individual at this specific point may have influenced their response to the individual types of support (e.g., tangible vs. emotional). We also measured nocturnal blood pressure over a single 24 h period; it would be beneficial to measure over several 24 h periods. Finally, it is important to note that while ABP is a valuable and well-validated tool for measuring daytime and nighttime BP and has consistently found associations between cardiovascular measures and social support, a recent meta-analysis on daytime ABP found no such connection [44]. Our findings are consistent with that result, and yet we showed social support associated with total DBP dipping, and with both SBP and DBP dipping within the various component parts of the ISEL measure. It is therefore important that future research examine how the findings of this study and of current research fit into the overall literature on social support and cardiovascular health.
## 5. Conclusions
Despite these limitations, our study demonstrated the importance of social support for normotensive individuals who may be at greater risk for insufficient support, and the importance of examining social support more broadly. Further, our study demonstrated the importance of examining nocturnal blood pressure dipping in addition to daytime or nighttime blood pressure when assessing the benefits of social support on cardiovascular health. Future studies should also consider an examination of nocturnal dipping in a more diverse sample.
# A Novel Combination of Sotorasib and Metformin Enhances Cytotoxicity and Apoptosis in KRAS-Mutated Non-Small Cell Lung Cancer Cell Lines through MAPK and P70S6K Inhibition
## Abstract
Novel inhibitors of KRAS with the G12C mutation (sotorasib) have demonstrated short-lasting responses due to resistance mediated by the AKT-mTOR-P70S6K pathway. In this context, metformin is a promising candidate to break this resistance by inhibiting mTOR and P70S6K. Therefore, this project aimed to explore the effects of the combination of sotorasib and metformin on cytotoxicity, apoptosis, and the activity of the MAPK and mTOR pathways. We created dose–effect curves to determine the IC50 concentration of sotorasib and the IC10 of metformin in three lung cancer cell lines: A549 (KRAS G12S), H522 (wild-type KRAS), and H23 (KRAS G12C). Cellular cytotoxicity was evaluated by an MTT assay, apoptosis induction through flow cytometry, and the MAPK and mTOR pathways were assessed by Western blot. Our results showed a sensitizing effect of metformin on the effect of sotorasib in cells with KRAS mutations and a slight sensitizing effect in cells without KRAS mutations. Furthermore, we observed a synergistic effect on cytotoxicity and apoptosis induction, as well as a notable inhibition of the MAPK and AKT-mTOR pathways after treatment with the combination, predominantly in KRAS-mutated cells (H23 and A549). The combination of metformin with sotorasib synergistically enhanced cytotoxicity and apoptosis induction in lung cancer cells, regardless of KRAS mutational status.
## 1. Introduction
KRAS mutations occur in up to $35\%$ of patients with non-small cell lung cancer (NSCLC) [1] and represent $50\%$ of oncogenic mutations in adenocarcinoma histology. Clinically, these genetic alterations are usually related to age over 65 years, smoking history, and mutual exclusivity with alterations in the Epidermal Growth Factor Receptor (EGFR) and EML4-ALK translocations [2,3]. Furthermore, alterations in the KRAS oncogene have been considered predictors of poor response in chemotherapy-treated NSCLC patients harboring advanced or metastatic disease [4]. The biological importance of KRAS mutations lies in impaired GTP hydrolysis, which keeps the protein aberrantly activated [5] and results in constitutive activation of cell signaling pathways such as mitogen-activated protein kinase (MAPK) and AKT-mTOR-P70S6K [6]. In NSCLC, these alterations occur mainly at codon 12 ($80\%$), mostly as a substitution of glycine by cysteine (G12C, $42\%$), but there are also reported interchanges of glycine for valine (G12V, $21\%$), glycine for aspartate (G12D, $17\%$), and glycine for alanine (G12A, $7\%$) [7]. The G12C mutation is particularly relevant, as it binds specific KRAS inhibitors, such as sotorasib and adagrasib [8], which inhibit the phosphorylation of ERK in cells with this mutation [6], correlating with important reductions in tumor size [5,6] and even showing promising results in clinical trials [9,10] of lung, colorectal, pancreatic, and endometrial cancers [11]. However, although the antineoplastic effects of sotorasib have been clearly described, its short-lasting clinical responses have become its most important drawback [10,11,12]. Consequently, preclinical evidence has suggested that sotorasib efficacy in KRAS-mutated tumors may be affected by diverse off-target resistance mechanisms [13], among which the most relevant is MAPK pathway reactivation by AKT-mTOR-P70S6K signaling [14]. Thus, metformin represents a pharmacological alternative that may overcome this resistance mechanism, since this biguanide inhibits complex 1 of the mitochondrial respiratory chain, subsequently activating AMP-activated protein kinase (AMPK). This triggers the activity of intracellular intermediaries to inhibit mTORC1, finally decreasing protein synthesis in cancer cells through p70S6K inhibition [15]. Accordingly, diverse studies have evidenced the cytotoxic role of metformin as monotherapy, its capacity to promote apoptosis and inhibit the mTOR pathway, as well as the correlation of these findings with reduced tumor sizes in murine models [16,17,18].
Additionally, the combination of metformin with tyrosine kinase inhibitors, like afatinib, synergistically increased cell cytotoxicity, induction of apoptosis, and inhibition of the PI3K-AKT-mTOR pathway in the A549 (KRAS G12S) cell line, even though these cells lack EGFR mutations. This suggests a sensitizing effect on afatinib mediated by metformin [19], which was further supported by in vivo studies reporting that combining metformin with diverse other targeted therapies decreased tumor size and inhibited mTOR signaling in mouse neoplasms derived from the A549 cell line (KRAS G12S) [20].
Likewise, in vitro evidence has demonstrated that metformin also regulates the MAPK pathway; for instance, Ko et al. [17] showed that increasing metformin concentrations produced a dose-dependent inhibition of p-MEK1/2 and p-ERK1/2 in A549 and H1975 cells. Comparably, Do et al. [16] identified that metformin inhibited p-Raf and p-ERK1/2 in a dose-dependent manner. Thus, the effects of this biguanide extend beyond the mTOR pathway.
Furthermore, emerging clinical evidence supports the concomitant use of metformin with antineoplastic drugs; for example, Arrieta et al. [21] reported that the combination of metformin with tyrosine kinase inhibitors (TKIs) increased the overall and progression-free survival periods in patients with EGFR-mutated NSCLC. Similarly, a phase II clinical trial showed a significant increase in the progression-free survival (PFS) of NSCLC patients after combined treatment with metformin and paclitaxel, carboplatin, or bevacizumab [22]. These findings suggest that metformin may enhance the clinical effectiveness of other antineoplastic agents [20,23,24,25].
Finally, the molecular consequences derived from combining metformin and sotorasib remain unexplored; therefore, this study aimed to analyze their effects on cell viability, apoptosis and the activity of MAPK and AKT-mTOR pathways in lung cancer cell lines harboring different KRAS mutational statuses.
## 2.1. Metformin Increases Sotorasib-Driven Cytotoxicity in KRAS-Mutated Lung Cancer Cell Lines
First, we found a greater decrease in cellular viability using the combination of metformin and sotorasib, compared to the corresponding monotherapies. Specifically, we found significant differences between the combination and sotorasib alone in the KRAS-mutated cell lines H23 ($56.2\%$ vs. $44.6\%$; p = 0.0457; Figure 1A) and A549 ($31.6\%$ vs. $53.9\%$; p = 0.0223; Figure 1B). In contrast, the wild-type KRAS cell line (H522) did not display statistical significance in this comparison ($57.4\%$ vs. $47.6\%$; Figure 1C). Moreover, the pharmacodynamic analysis reported synergy between sotorasib and metformin in all tested cell lines, including H23 (CI = 0.62450; Figure 1A), A549 (CI = 0.73647; Figure 1B), and H522 (CI = 0.91655; Figure 1C).
## 2.2. Increased Apoptosis Induction by the Addition of Metformin to Sotorasib, Regardless of KRAS Status
Next, we measured membrane markers of apoptosis (annexin-V) and necrosis (7-AAD). As shown in Figure 2, all cell lines exhibited increases in apoptosis induction driven by the combination compared to controls, including H23 ($22.3\%$ vs. $70.27\%$, p < 0.0001), A549 ($8.02\%$ vs. $80.99\%$, p < 0.0001), and H522 cells ($1.6\%$ vs. $49.47\%$, p < 0.0001). Sotorasib alone also showed significant differences compared to controls in H23 ($24\%$ vs. $66.2\%$, p < 0.0001) and A549 cells ($5.6\%$ vs. $71.7\%$, p = 0.0127). Notably, H522 was the only cell line showing a significant difference between sotorasib and the combination ($64.5\%$ vs. $80.9\%$, p = 0.0217).
## 2.3. Combined Therapy Significantly Decreases MAPK Pathway Activity
After confirming that metformin and sotorasib concomitantly induced cell death, we assessed their biological impact on diverse intermediaries of the MAPK pathway, such as KRAS, CRAF, BRAF, and ERK1/2. As expected, KRAS expression was markedly reduced in H23 cells after treatment with sotorasib alone (p = 0.0103) or in combination with metformin (p = 0.0013). In A549 cells, p-CRAF was markedly inhibited by all treatments, while BRAF expression was only reduced in the combination group (p ≤ 0.01). Additionally, p-CRAF was inhibited by the combined treatment in H522 cells (p ≤ 0.05). Furthermore, p-ERK1/2 (p-MAPK) expression was decreased in H23 by all treatments, in A549 cells by metformin (p ≤ 0.01) and the combination (p ≤ 0.01), and in H522 by the combination (p ≤ 0.01) and metformin alone (p ≤ 0.01) (Figure 3).
## 2.4. Combined Treatment of Metformin and Sotorasib Inhibits AKT and P70S6K Activation
Next, we explored the inhibitory efficacy of the combination over the AKT-mTOR-P70S6K pathway, since this is the main resistance mechanism to KRAS inhibitors (Figure 4). Specifically, AKT expression was reduced after sotorasib alone (p ≤ 0.05) or the combination (p ≤ 0.01) in H522 cells. Moreover, p-AKT was significantly inhibited by the combination in H23 (p = 0.0163) and H522 cells (p ≤ 0.05), but only as a non-significant trend in A549 cells. Furthermore, p-P70S6K was significantly inhibited by the combination in H23 (p = 0.0071) and H522 cells (p ≤ 0.01), but only as a non-significant trend in A549 cells.
## 3. Discussion
Treatment with sotorasib has modified the response and survival of patients with KRAS G12C mutations. However, despite showing promising responses, intrinsic or acquired resistance mechanisms have prevented better clinical results. In this sense, the most important mechanism of resistance to sotorasib is the activation of the AKT-mTOR-P70S6K pathway. As metformin has previously been demonstrated to inhibit this signaling pathway, we explored whether combining this biguanide with sotorasib improved sotorasib effectiveness in lung cancer cells. Our results showed that the combination exerted synergistic effects on cytotoxicity and apoptosis in cells with G12C and G12S KRAS mutations. The most similar example of this phenomenon in the literature is a study from our research group showing that combining metformin and afatinib (an EGFR tyrosine kinase inhibitor) induces a synergistic effect on A549 cells, even though this cell line lacks EGFR mutations. This effect was mainly attributable to metformin-driven AMPK activation, which then inhibited mTOR-P70S6K signaling [19]. Analogously, metformin also potentiates apoptosis in combination with selumetinib (a MEK inhibitor) [26], implying that inhibition of the MAPK pathway is important for metformin-driven apoptosis as part of a wide mosaic of other reported mechanisms, such as lowering Bcl-2 protein levels, increasing Bax expression [27], and promoting G0/G1 cell cycle arrest [18]. Sotorasib has also been combined with other drugs to overcome its resistance, like buparlisib (a PI3K inhibitor) [28] or DT2216 (a BCL-XL-targeting agent) [29], thereby supporting the assertion that the PI3K-AKT-mTOR pathway plays an important role in the apoptosis of KRAS-mutated cells [30].
After assessing the cytotoxic effects of the concomitant therapy, we explored its impact on the MAPK and AKT-mTOR-P70S6K signaling pathways, showing marked inhibition of both in all tested cells, regardless of KRAS mutational status. This is relevant, since MAPK pathway inhibition is a well-known consequence of sotorasib monotherapy in models with the G12C mutation [31], and it is equally expected that its high specificity for this alteration prevents sotorasib from inhibiting p-ERK in cells without the G12C mutation [6]. Therefore, our results show, even at the proteomic level, an important sensitization by metformin to sotorasib effects in non-common KRAS mutations. Furthermore, mTOR inhibition has special importance in reaching effective cytotoxicity in cells with an over-activated MAPK pathway, as important cytotoxic effects in cell lines with KRAS or MEK mutations have been reported from the use of mTOR inhibitors, whether alone [32,33] or in combination with MAPK inhibitors [34]. Finally, we found that metformin synergizes with sotorasib due to an important inhibition of AKT and P70S6K in all cells. These findings match the results previously reported by our research group for combining metformin and afatinib in lung cancer cells, in which we described that this biguanide potentiates apoptosis induction by inhibiting the EGFR-AKT-P70S6K pathway [19]. Furthermore, previous studies have also reported that inhibiting the PI3K-AKT-mTOR pathway positively correlated with apoptosis induction [32,33]. These findings are further consistent with preclinical evidence testing the concomitant use of metformin and figitumumab, showing inhibition of the PI3K-AKT and MAPK signaling pathways [20], thus placing these drugs as potential enhancers of KRAS inhibitors, such as sotorasib. On the other hand, metformin has demonstrated variable outcomes over MAPK, as some studies report that this biguanide increases B-RAF and C-RAF activity [35,36], while others report the opposite effect [16,17]. This phenomenon can be explained by differences in the concentrations used during in vitro tests; for instance, metformin IC50 concentrations > 20 mM in A549 cells are reported to cause an active inhibition of the AKT-mTOR pathway, which decreases the inhibitory activity of Rheb over the dimerization of C-RAF and B-RAF [35], indirectly promoting MAPK activity. Meanwhile, lower concentrations of this biguanide (1–10 mM) are not reported to inactivate Rheb, thereby allowing MAPK pathway inhibition, as evidenced in this study for A549 (CRAF and p-MAPK) and H522 cells (p-CRAF, p-MAPK, CRAF, and MAPK) after treatment with metformin, either as monotherapy or in combination with sotorasib. Altogether, our results suggest that the main mechanism of action of combining metformin and sotorasib is the concomitant inhibition of the AKT-mTOR-P70S6K pathway by metformin and of MAPK by sotorasib, thus simultaneously decreasing protein synthesis and cell growth. This mechanism of action is further illustrated in Figure 5.
Moreover, as part of the wide mosaic of intracellular effects of metformin, plenty of evidence demonstrates that this biguanide modifies diverse metabolic pathways to avoid the development of the Warburg effect in cells with KRAS mutations. Although we did not evaluate metabolism in this study, we previously reported that combining metformin and afatinib strongly inhibited GLUTs and markedly increased AMPK activity, regardless of LKB1 involvement [19]. This may be explained by AMPK-driven inhibition of energy generation [37]. Importantly, our study shows that cells lacking LKB1, such as A549, decreased MAPK and p-MAPK expression, which may also promote metabolic consequences, such as decreased lactate levels and AMPK-mediated glycolysis. Therefore, the metabolic importance of metformin may be of special interest in cells with KRAS mutations, as this driver alteration is metabolically involved in cancer progression [38].
## Strengths and Limitations
The main strength of this study is exploring the combined effect of metformin and sotorasib in cells with and without the KRAS G12C mutation that confers susceptibility to sotorasib, demonstrating a synergistic relationship between metformin and sotorasib for the first time. Nevertheless, we are aware of the limitations of this investigation; first, only one cell line was used for each of the most representative mutational profiles (KRAS G12C, G12S, and non-KRAS mutated). Second, although our results in A549 cells are in line with previous reports, evidence is lacking for H23 and H522 cells, which prevents complete generalization of our results to studies involving these cell lines.
## 4.1. Cell Lines and Reagents
Human lung adenocarcinoma cell lines H23 (KRAS G12C), A549 (KRAS G12S), and H522 (without KRAS mutations) were purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). H522 and H23 cells were cultured in RPMI-1640 medium (Gibco, Waltham, MA, USA; 31800-022), while A549 cells were cultured in F12 medium (Gibco, Waltham, MA, USA; 21700-075); both media were supplemented with $10\%$ Fetal Bovine Serum (FBS) (Gibco, New York, NY, USA; 26140-079) and $1\%$ penicillin–streptomycin–amphotericin B (MP Biomedicals, Fountain Pkwy, OH, USA; 091674049). Cells were incubated in an atmosphere of $5\%$ CO2 at 37 °C. Once cells constituted an 80–$90\%$ confluent monolayer, they were subcultured using 400 µL of Trypsin-EDTA 1X solution (Sigma Aldrich, St. Louis, MO, USA; 549430C).
Metformin (Sigma Aldrich, St. Louis, MO, USA; PHR1084) was diluted in the appropriate culture medium of each cell line at a concentration of 100 mM. Similarly, sotorasib (Medkoo Biosciences, Morrisville, NC, USA; 207085) was diluted in dimethyl sulfoxide (DMSO) at concentrations of 5, 10, 15, 20, and 25 µM.
## 4.2. Cell Viability Assay
A quantity of $1 \times 10^4$ cells per well was seeded in triplicate in 96-well plates. After 24 h of incubation, cells were treated for 72 h with metformin at different concentrations in triplicate wells (5, 10, 15, 20, and 25 mM). In the same way, cells in three independent experiments were treated for 72 h with 5, 10, 15, 20, and 25 µM of sotorasib as monotherapy.
Subsequently, the MTT solution (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) at a concentration of 5 mg/mL (Sigma Aldrich, St. Louis, MO, USA; catalog number M2128) was added to the wells, which were incubated for 4 h. After this period, the culture medium was removed and replaced with 200 µL of isopropanol-DMSO (1:1) solution to dissolve the formazan crystals. Cell viability was quantified by measuring absorbance at 570 nm (BioTek, Saint Clare, CA, USA; ELX 808) to calculate optical density values.
The results of such measurements were averaged and normalized to controls ($100\%$). From the cytotoxicity results, we determined the IC50 and IC10 doses of sotorasib and metformin, respectively, for each cell line, which are shown in Table 1.
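As a small illustration of this step, the sketch below (Python) normalizes raw absorbance to percent viability and interpolates an IC50 from the dose–effect curve. All absorbance values are hypothetical, and linear interpolation is only one simple way to read off an IC50; it is not necessarily the fitting method used in the study.

```python
import numpy as np

# Hypothetical 570 nm absorbance: untreated control triplicate and one drug's dose series
a_control = np.array([0.92, 0.95, 0.90])
doses = np.array([5, 10, 15, 20, 25])                 # e.g., sotorasib, µM
a_treated = np.array([0.81, 0.66, 0.47, 0.33, 0.21])  # mean of triplicates per dose

viability = a_treated / a_control.mean() * 100.0      # normalize to 100% control

# Linearly interpolate the dose at which viability crosses 50%
# (np.interp needs increasing x, so both arrays are reversed)
ic50 = np.interp(50.0, viability[::-1], doses[::-1])
print(f"IC50 ≈ {ic50:.1f} µM")
```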
Then, each cell line was seeded in 96-well plates at $1 \times 10^4$ cells per well, arranged in five sets of triplicate wells representing the following treatment groups: control, DMSO, metformin, sotorasib, and the combination of sotorasib and metformin. After 24 h of incubation, each cell line was treated with its respective IC10 dose of metformin and IC50 dose of sotorasib, either as monotherapy or in combination. After that, the viability test was carried out using MTT solution and a spectrophotometer, as previously described.
## 4.3. Analysis of Drug Combination Index
To determine the type of pharmacodynamic interaction between metformin and sotorasib, we calculated their combination index (CI) for each cell line using Compusyn 1.0 software (Biosoft, Cambridge, UK). Combination index values < 1 were interpreted as synergistic, values from 1 to 1.10 as additive, and values > 1.10 as antagonistic.
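For orientation, the combination index of Chou and Talalay at a given effect level is the sum of the ratios between each drug's dose in the combination and the dose of that drug alone that produces the same effect. A minimal sketch with hypothetical doses follows.

```python
def combination_index(d1, d2, dx1, dx2):
    """Chou-Talalay CI at one effect level: d1 and d2 are the doses of each drug
    in the combination; dx1 and dx2 are the doses of each drug alone that
    produce the same effect."""
    return d1 / dx1 + d2 / dx2

# Hypothetical example: 8 µM sotorasib + 4 mM metformin match the effect of
# 15.4 µM sotorasib alone or 22 mM metformin alone
ci = combination_index(8, 4, 15.4, 22)
print(f"CI = {ci:.3f}")  # < 1 synergistic, 1-1.10 additive, > 1.10 antagonistic
```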
## 4.4. Apoptosis Assay
To assess the level of apoptosis induction, cell lines were seeded in 24-well plates at a density of $4 \times 10^4$ cells per well and incubated overnight. After 24 h, cells were incubated with the IC10 and IC50 doses of metformin and sotorasib, respectively, either as monotherapies or in combination, for 72 h at 37 °C and $5\%$ CO2. Then, the cells were detached using trypsin, washed three times with 1X PBS, and stained with the FITC Annexin V Apoptosis Detection Kit with 7-AAD (Biolegend, San Diego, CA, USA; 640922). Finally, the cells were evaluated through flow cytometry in accordance with the manufacturer’s instructions.
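For intuition, the standard quadrant logic for Annexin V/7-AAD data can be sketched as follows; the thresholds and intensity values are hypothetical, and real gating would be set against unstained and single-stained controls.

```python
def classify_event(annexin, aad, thr_annexin=1e3, thr_aad=1e3):
    """Quadrant classification for one Annexin V / 7-AAD event (thresholds illustrative)."""
    ann_pos, aad_pos = annexin > thr_annexin, aad > thr_aad
    if ann_pos and aad_pos:
        return "late apoptosis"
    if ann_pos:
        return "early apoptosis"
    if aad_pos:
        return "necrosis"
    return "live"

# Hypothetical (annexin, 7-AAD) fluorescence intensities for five events
events = [(200, 150), (5000, 400), (8000, 9000), (300, 7500), (6500, 300)]
print([classify_event(a, d) for a, d in events])
```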
## 4.5. Western Blot Analysis
After 72 h of treatment, cell lines were washed three times with PBS solution and lysed with the RIPA lysis buffer system (Santa Cruz Biotechnology, Dallas, TX, USA; SC-24948) according to the manufacturer’s instructions. Subsequently, extracted proteins were quantified using the Bradford assay (Bio-Rad, Hercules, CA, USA; #5000205). Then, 40 µg of total protein was separated by electrophoresis for 110 min at 100 V on a $10\%$ SDS-PAGE gel and transferred onto 0.2 µm nitrocellulose membranes using the Trans-Blot Turbo Transfer System set at 20 V and 2.5 A. The efficacy of this process was checked by Ponceau red staining. Subsequently, membranes were blocked with a $10\%$ BSA-PBS-Tween solution for 30 min, underwent three 10-min washes with PBS-Tween solution, and were incubated with their corresponding primary antibodies (dilution 1:1000) overnight at 4 °C. Primary antibodies were directed against the following molecules: KRAS (Santa Cruz Biotechnology, Dallas, TX, USA; SC-30), B-RAF (Cell Signaling, Danvers, MA, USA; 9433), C-RAF (Cell Signaling, Danvers, MA, USA; 53745), p-CRAF (Cell Signaling, Danvers, MA, USA; 9421), MAPK (Cell Signaling, Danvers, MA, USA; 9102), p-MAPK (Cell Signaling, Danvers, MA, USA; 4370), AKT (Santa Cruz Biotechnology, Dallas, TX, USA; SC-5298), p-AKT (Santa Cruz Biotechnology, Dallas, TX, USA; SC-514032), P70S6K (Cell Signaling, Danvers, MA, USA; 9202), p-P70S6K (Cell Signaling, Danvers, MA, USA; 9205), and GAPDH (Santa Cruz Biotechnology, Dallas, TX, USA; SC-47724).
After incubation, each primary antibody was removed from its corresponding membrane, and each membrane underwent three 15-min washes with PBS-Tween solution. Once completed, membranes were incubated for 1 h with a 1:10,000 dilution of their corresponding secondary antibodies and then underwent five 10-min washes to reduce background derived from the secondary antibodies. Finally, proteins of interest were visualized using an enhanced chemiluminescence kit (LI-COR, Lincoln, NE, USA), and band intensities were quantified by densitometry using ImageJ software (version 1.49, National Institutes of Health, Bethesda, MD, USA).
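As a small illustration of the densitometry step, band intensities are typically normalized to the loading control (GAPDH here) and then expressed relative to the untreated control; the numbers below are hypothetical.

```python
import numpy as np

# Hypothetical ImageJ band intensities (arbitrary units) for CON, MET, SOT, COMBO
p_akt = np.array([1250, 980, 610, 420])
gapdh = np.array([2100, 2050, 2120, 2080])

norm = p_akt / gapdh  # normalize each lane to its loading control
rel = norm / norm[0]  # express relative to the untreated control
for grp, val in zip(["CON", "MET", "SOT", "COMBO"], rel):
    print(f"{grp}: {val:.2f}x control")
```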
## 5. Conclusions
Metformin exhibits a sensitizing effect on sotorasib in non-KRAS-mutated cells. Furthermore, the combination of sotorasib and metformin exerts a synergistic enhancement of cytotoxicity and apoptosis induction, likely driven by the concomitant inhibition of the MAPK and mTOR-P70S6K pathways in KRAS-mutated cells.
# Short-Term Ambient Air Ozone Exposure and Components of Metabolic Syndrome in a Cohort of Mexican Obese Adolescents
## Abstract
Ambient air pollution is a major global public health concern, yet little evidence exists about the effects of short-term exposure to ozone on components of metabolic syndrome in young obese adolescents. The inhalation of air pollutants, such as ozone, can participate in the development of oxidative stress, systemic inflammation, insulin resistance, endothelial dysfunction, and epigenetic modification. Blood alterations in components of metabolic syndrome (MS) and short-term ambient air ozone exposure were determined and evaluated longitudinally in a cohort of 372 adolescents aged 9 to 19 years. We used longitudinal mixed-effects models to evaluate the association between ozone exposure and the risk of components of metabolic syndrome and its parameters separately, adjusted for relevant covariates. We observed statistically significant associations between exposure to ozone in tertiles at different lag days and the parameters associated with MS, especially for triglycerides (20.20 mg/dL, $95\%$ CI: 9.5, 30.9), HDL cholesterol (−2.56 mg/dL, $95\%$ CI: −5.06, −0.05), and systolic blood pressure (1.10 mmHg, $95\%$ CI: 0.08, 2.2). This study supports the hypothesis that short-term ambient air exposure to ozone may increase the risk of alterations in some components of MS, such as triglycerides, cholesterol, and blood pressure, in the obese adolescent population.
## 1. Introduction
Ambient air pollution is a major public health concern globally. Over $90\%$ of the world’s population is estimated to live in zones where air pollutant concentrations exceed the World Health Organization (WHO) guideline limits [1]. In Mexico, ozone is found at concentrations higher than what is established as acceptable by the Official Mexican Standard, as estimated by the Atmospheric Monitoring System of the Metropolitan Area of the Valley of Mexico (SIMAT) [2].
Several epidemiological studies have linked ambient air pollution with respiratory diseases (chronic obstructive pulmonary disease, asthma, decreased lung function, and airway inflammation) [3,4,5,6], cardiovascular diseases [7,8,9], and lung cancer [10,11]. These effects indicate that exposure to ambient air pollutants might cause events during the later stages of life and initiate chronic disease processes. However, the effects of air pollutants on the earlier stages of developing chronic diseases are less studied. Metabolic syndrome (MS) comprises a cluster of major modifiable risk factors for non-communicable diseases, including abdominal obesity, dyslipidemia, elevated blood pressure, and high glucose concentrations [12,13]. MS affects approximately 10–$25\%$ of the global population, and its prevalence is rapidly increasing worldwide. This increase has been suggested to be related to genetic factors, low physical activity, and an unhealthy lifestyle; however, ambient air pollution could also be a risk factor for components of MS [14]. In this context, the inhalation of air pollutants can participate in the development of oxidative stress, systemic inflammation [15,16], insulin resistance [17], endothelial dysfunction [18,19], and epigenetic modification. These negative responses can independently and/or interactively contribute to the development of cardiovascular symptoms, all of which are components in the diagnosis of MS. Previous epidemiological and experimental studies have explored the relationship between air pollution exposure and individual MS components [14,20,21]. However, the existing evidence focuses mainly on the adult population, one of the main reasons being the complexity of diagnosing metabolic syndrome in adolescents, so it is important to study how exposure to environmental pollutants relates to metabolic disorders in this population group. Two previous epidemiological studies in humans investigated the relationship between air pollutants and MS, and both reported significant associations [14,22].
Additionally, a recent animal study showed that exposure to particulate matter resulted in weight gain and cardiorespiratory and metabolic dysfunction [20]. More recently, studies in rats and humans reported that acute or short-term exposure to ozone, under controlled conditions, can lead to metabolic disturbances within hours or days, since changes in the blood metabolome were observed [21,23]. Even though these metabolome studies have provided important information regarding new metabolites and the possible mechanisms of action of acute exposure to ozone, it is still necessary to evaluate more closely the short-term effects on macromolecules derived from these metabolic processes in population groups exposed to environmental fluctuations in ozone. Additionally, to our knowledge, no prior study has been conducted in Mexico to evaluate the association between short-term air pollution and MS in the adolescent population. Therefore, considering the current MS epidemic, the high levels of air pollution, and the scarcity of such evaluations, our study would be the first to assess the relationship between short-term exposure to ozone and MS in Mexican adolescents.
## 2.1. Design and Study Population
A dynamic cohort study of 415 adolescents living in the metropolitan area of Mexico City was conducted from January 2006 to August 2013. Participants were enrolled during the first three years and followed for one year on average (range: 6 months to 3 years) from when they attended the obesity clinic at the Children’s Hospital of Mexico Federico Gómez (HIM-FG), which provides health care to people aged 0 to 18 years from the entire metropolitan area of Mexico City.
Mexico City is part of the Metropolitan Zone of the Valley of Mexico (MZVM), with nine million inhabitants; approximately $52\%$ of the population are women, and $13.5\%$ belong to the 10–19 age group [24]. It is considered the largest and most complex city in the country, with high levels of traffic-related pollutant emissions [25].
The main objective of the cohort was to evaluate whether weight loss improved lung function and reduced local inflammation in obese adolescents aged 10 to 18 years with and without asthma. All adolescents who met the criteria for being overweight or obese according to Cole et al. [26] and agreed to participate in the study were included in a program of nutritional, physical, and psychological guidance to promote a healthier lifestyle. Adolescents were given recommendations to increase their physical activity by half an hour per day, and a nutritionist gave guidance on a healthier diet based on the WHO recommendations according to age and sex ($60\%$ carbohydrates, $20\%$ proteins, and $20\%$ fat).
All participating adolescents signed an informed consent letter in addition to the consent letter from both parents. The protocol was approved by the ethics committees of the Children’s Hospital of Mexico Federico Gómez and the National Institute of Public Health. The adolescents were scheduled for a first evaluation, at which a clinical history was taken and blood samples were drawn to evaluate the metabolic profile (cholesterol, triglycerides, high- and low-density lipoproteins, uric acid, glucose). Participants were then seen every three months for blood sampling and dietary guidance, and the questionnaires on the frequency of food consumption and physical activity were administered. Every 15 days during the first three months, and monthly thereafter up to one year, participants received psychological attention from trained personnel.
As part of this cohort, and preserving the longitudinal character of the base study, we selected a subsample of 372 adolescents aged 9 to 19 years, diagnosed as overweight or obese, in whom metabolic alterations in blood were evaluated every three months.
## 2.2.1. Components of Metabolic Syndrome Evaluation and Other Measures
To determine the biochemical parameters of metabolic syndrome, blood samples were taken by trained personnel: a sample of approximately 7 mL was extracted from a vein of the participant’s arm and duly safeguarded to maintain its integrity. The sample was obtained during the first hours of the morning, after asking the adolescent to arrive fasting and in optimal hydration conditions. The sample was centrifuged and separated into 2.5 mL vials, and then frozen for storage and subsequent analysis. All the extractions were conducted according to the manufacturer’s instructions.
Blood pressure was measured by trained personnel using a sphygmomanometer. The adolescent rested for 15 min, and the reading was taken twice to obtain an average of the measurements and a more accurate value. Information on anthropometric measures was obtained from participants at baseline and during the follow-up period. Each participant was weighed while wearing light clothing and standing without shoes on a calibrated platform scale (Health o meter brand, model 402 KL, with 100 g precision). Height (cm) was obtained using a stadiometer (Holtain Limited, Crymych, Dyfed, UK), with the participant barefoot on a flat surface, forming a right angle with the vertical bar of the stadiometer; each patient was asked to inhale before the headboard was slid over the top point of their head. The BMI (BMI = weight (kg)/height (m)²) was calculated to indirectly quantify body fat, considering the following cut-off points: 20, 25, and 30, corresponding to the categories of normal weight, overweight, and obesity, according to Cole.
The cut-off points for the parameters related to the diagnosis of MS (triglycerides, HDL cholesterol, and fasting glucose) were those established by the International Diabetes Federation (IDF) [27]. A participant was considered positive for MS if they had a waist circumference greater than the 90th percentile threshold or, failing that, the condition of overweight or obesity according to the body mass index, plus two of the following criteria: (1) triglyceride levels > 150 mg/dL, (2) HDL cholesterol levels < 40 mg/dL, (3) systolic blood pressure > 130 mmHg, (4) diastolic blood pressure > 85 mmHg, and (5) fasting glucose levels > 100 mg/dL.
The parameters of the MS were handled in two ways: in the first, the values for each component were treated continuously, and each one was handled as an individual variable; in the second, a joint binary variable (yes/no) was constructed from the different parameters that make up the MS, based on the IDF definition [27].
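As an illustration of the joint (yes/no) variable, the following minimal sketch (Python) applies the rule described above; the function signature and the example values are hypothetical, not part of the study's code.

```python
def ms_positive(waist_above_p90: bool, overweight_or_obese: bool,
                tg: float, hdl: float, sbp: float, dbp: float, glucose: float) -> bool:
    """Binary MS indicator: central adiposity plus at least two of five criteria
    (thresholds as stated in the text)."""
    adiposity = waist_above_p90 or overweight_or_obese
    criteria = [tg > 150, hdl < 40, sbp > 130, dbp > 85, glucose > 100]
    return adiposity and sum(criteria) >= 2

# Hypothetical participant: obese, high triglycerides, low HDL -> MS positive
print(ms_positive(True, True, tg=180, hdl=35, sbp=110, dbp=70, glucose=90))  # True
```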
## 2.2.2. Exposure Assessment
All the information related to air pollutants, as well as the meteorological variables (wind direction and speed, humidity, and temperature), was obtained through the Atmospheric Monitoring System of the Metropolitan Zone of the Valley of Mexico (SIMAT), which takes measurements continuously, 365 days a year. Currently, in the MZVM, the atmospheric monitoring network (AMN) has 40 air quality monitors.
The daily exposures to ozone and the other pollutants were constructed considering two important periods of the day: the shift attended at school (morning: 7:00 to 14:00 h; evening: 15:00 to 19:00 h), with the remaining hours assigned to the exposure at the home address. Accordingly, hourly averages were used to obtain a daily maximum (1-h maximum) and a maximum 8-h average, estimated per adolescent at the school or home address once their exposure diary was constructed, and lags of 1 up to 15 days prior to the blood sample collection visit were recorded.
The exposure to O3 was assigned using a geographic information system (GIS), which considered the distance between the monitor and the area where both the home and school of the participants were located, based on their address and zip code, estimating exposure from the monitor closest to either the school or the home according to the school shift. Additionally, during the study period (specifically in 2010), the AMN made some changes both in the location of some monitors and in the placement or elimination of others; these changes were therefore considered when assigning the closest monitor.
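A minimal sketch (Python) of this assignment logic follows, assuming projected coordinates and a daily ozone series; all values are hypothetical, and a real GIS workflow would additionally handle geocoding, school shifts, and the monitor relocations described above.

```python
import numpy as np
import pandas as pd

# Hypothetical monitor coordinates (projected km) and a participant's home location
monitors = pd.DataFrame({"id": ["M1", "M2", "M3"],
                         "x": [482.1, 495.7, 478.9],
                         "y": [2145.3, 2150.2, 2138.6]})
home = (480.0, 2144.0)

# Nearest-monitor assignment by Euclidean distance
d = np.hypot(monitors["x"] - home[0], monitors["y"] - home[1])
nearest = monitors.loc[d.idxmin(), "id"]

# Lagged exposure: daily ozone maxima at that monitor, lags 1-15 before the visit day
rng = np.random.default_rng(1)
o3 = pd.Series(rng.uniform(20, 110, size=60))  # hypothetical daily 1-h maxima, ppb
visit_day = 45
lags = {f"lag{k}": o3.iloc[visit_day - k] for k in range(1, 16)}
print(nearest, round(lags["lag2"], 1))
```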
## 2.3.1. Diet
The dietary intake was assessed using a validated food frequency questionnaire, which indicates how many times a week and how many servings of food the participant consumed per day. This report was intended to represent dietary consumption during the three months elapsing between one measurement and the next. The questionnaire was applied by trained nutritionists and was answered by the adolescent, supported by the person who accompanied them to the consultation. To obtain the kilocalories consumed in one day, the portions of each food that the participant consumed during a week were calculated to then obtain a daily average of carbohydrates, lipids, and proteins; the calculation of micronutrients consumed during a day (antioxidants: vitamin C and vitamin E) was conducted in the same way. The nutritional contribution of each food was calculated based on the reference values of food composition established by the Tables for Practical Use of Foods of Greater Consumption, third edition [28], as well as by the Mexican System of Equivalent Foods, third edition.
## 2.3.2. Physical Activity
Physical activity was categorized as mild, moderate, or intense through a short physical activity questionnaire administered at each adolescent visit. The questionnaire consisted of six questions evaluating the physical activity practiced in the last 7 days and the time spent on it: vigorous (running, swimming, riding a bike, or playing on a team), moderate (quick walk or jog of 20 min or more), or light (walking 20 min). Additionally, it explored the time spent sitting in front of a television, a computer, or playing video games. The questionnaire was applied by trained personnel and was answered by the adolescent, supported by the person who accompanied them to the consultation.
## 2.4. Statistical Analysis
An exploratory analysis of the information was carried out, in which each variable’s quality and distribution were evaluated. In addition, the minimum and maximum ranges of the variables were analyzed to identify extreme values that would affect the distribution, excluding entries that were not biologically plausible and ensuring that these exclusions did not represent more than $5\%$ of the total values within the variable. The short-term association of ozone exposure with the metabolic outcomes was evaluated using linear mixed-effects models, with participant ID as a random intercept, and using models for continuous and binary responses (the latter only for the metabolic syndrome condition). We also evaluated as potential confounders physical activity, BMI, antioxidant intake (vitamin C and vitamin E), asthma presence, kilocalories consumed, and meteorological variables, retaining only those that were significant in the final model. Likewise, statistical significance was evaluated upon the inclusion or exclusion of each variable to elucidate how the coefficient was affected over time, with the aim of finding the best estimate of the model with the smallest number of variables.
A mixed-effects model with random intercept was used, since some variables had repeated measurements over time while others remained constant throughout the study. Within the mixed models, short-term exposure was evaluated in tertiles, with the lowest tertile as the reference category for the remaining two. The cut-off points for each tertile differed depending on the number of days before the sample was taken within which the exposure (lags) was considered. Lag days from 0 to 15 were considered, in accordance with the previous literature, in which multiple lags of short-term ozone exposure were evaluated for cardiometabolic risk [22]. To prevent the results from being biased by exposure to other pollutants, especially particulate matter, which has been widely reported as associated with ischemic disease and cardiovascular risk, we additionally adjusted the model for PM2.5 concentrations at the same lag as ozone, using the 24-h maximum. A stratified analysis was performed for asthmatic and non-asthmatic participants; however, no statistically significant association was obtained. All statistical analyses were carried out using the statistical package STATA 14.0.
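The analysis was run in Stata 14; for orientation, a roughly equivalent random-intercept specification in Python's statsmodels is sketched below on synthetic data. All column names and coefficients are hypothetical placeholders, and the study's full adjustment set (physical activity, antioxidant intake, asthma) would enter the formula in the same way as BMI does here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for the cohort: one row per visit
rng = np.random.default_rng(7)
n = 372 * 4  # 372 participants, 4 visits each
df = pd.DataFrame({
    "id": np.repeat(np.arange(372), 4),
    "o3_tertile_lag2": rng.integers(1, 4, n),  # 1 = lowest tertile (reference)
    "bmi": rng.normal(28, 3, n),
})
df["tg"] = 120 + 8 * (df["o3_tertile_lag2"] - 1) + 2 * (df["bmi"] - 28) \
    + rng.normal(0, 25, n)

# Random-intercept model: triglycerides vs. ozone tertiles (lag 2), adjusted for BMI
model = smf.mixedlm("tg ~ C(o3_tertile_lag2, Treatment(1)) + bmi",
                    data=df, groups=df["id"])
print(model.fit().summary())
```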
## 3. Results
The mean age of participants was 12.8 years (SD = 2.1 years), and more than half of the subjects were males ($56.5\%$). The characteristics of the study participants are summarized in Table 1. Based on the main definition of MS for the study, the prevalence of MS was $10.0\%$. Table 2 shows the results for the parameters indicative of MS. We found that $45\%$ of the participants were obese; in terms of fasting glucose levels, only $8\%$ of the participants presented values above what was considered normal; $40\%$ had HDL cholesterol levels below the established limits; $35\%$ had triglyceride levels above the cut-off point of normality; only $3\%$ of the participants presented high blood pressure, both systolic and diastolic; and $9\%$ of participants met the criteria for the classification of metabolic syndrome in the baseline data.
Table 3 summarizes the descriptive statistics of air pollution concentrations and meteorological variables based on the geographical areas of the Metropolitan Area of the Valley of Mexico. We found that ozone concentrations were lower in the northern part and increased toward the areas further south. This coincides with a slight increase in the average temperature for the central and southern areas, which, added to the wind and relative humidity conditions, means that the population living in these areas is exposed to slightly higher concentrations than the rest.
The associations between ambient air ozone concentrations and components of MS are summarized in Table 4. In general, we observed statistically significant increasing trends across tertiles of ozone exposure for some of the parameters related to MS. The lags that showed a statistically significant association were those corresponding to 2, 7, 8, 11, and 13 days prior to the visit. We found an increase in triglyceride levels of 20.24 mg/dL ($95\%$ CI: 9.54, 30.95) for ozone exposure on lag day 2, as well as an increase of 12.55 mg/dL ($95\%$ CI: 1.44, 23.66) on lag day 11, for those in the third tertile relative to the lowest tertile of ozone exposure. In a similar third-versus-lowest-tertile comparison for HDL cholesterol, a decrease in blood concentration of −2.56 mg/dL ($95\%$ CI: −5.06, −0.05) was observed for ozone exposure on lag day 7; on lag day 8, the decrease was larger: −3.46 mg/dL ($95\%$ CI: −5.96, −0.95). Blood pressure showed a statistically significant change on lag day 13, with an increase of 1.14 mmHg ($95\%$ CI: 0.08, 2.2) in systolic blood pressure, while for diastolic blood pressure the increase was 0.91 mmHg ($95\%$ CI: −0.03, 1.86), which was marginally significant. For the adolescents who met the characteristics of metabolic syndrome, we observed ORs of 2.23 ($95\%$ CI: 1.10, 4.56) and 1.99 ($95\%$ CI: 0.93, 4.25) for third-tertile ozone exposure on lag days 2 and 3, respectively. We also observed an increased risk with lag 15.
All models were fitted as mixed-effects models with random intercept, adjusted for physical activity, BMI, antioxidant intake (vitamin C and vitamin E), and asthma presence. Meteorological variables were tested as adjustment variables, but these were not statistically significant, nor did they change the direction of the association between ozone and the evaluated outcome (Table 4). Additionally, we evaluated these models adjusting for PM2.5 concentrations (as a continuous variable, not in tertiles) at the same lag as ozone. The results are shown in Table 4. In most cases, the significance of ozone remained. PM2.5 was significant principally at lag 4 (Supplementary Materials).
## 4. Discussion
In this cohort study, we found that short-term ambient air exposure to ozone was significantly associated with an increase in some parameters related to MS in young populations. Although exposure to outdoor ambient levels of PM2.5, NO2, and O3 has been associated with asthma, respiratory diseases, and respiratory symptoms mainly in children, there are few previous longitudinal studies of this type of association in this age range, particularly in obese adolescents; to our knowledge, this is the first study to evaluate these effects in a low–middle-income country.
We found associations at different exposure lags, between 2 and 14 days, for most of the components of MS, which could place adolescents, given their obesity, at greater risk of being classified as positive for metabolic syndrome based on the IDF classification. The fact that some components were related to 2-day lags and others to longer lags may also be due to correlations among pollutants arising from weekly cyclical trends. One study in rats reported that short-term exposure to ozone can lead to higher levels of leptin and blood glucose, as well as other changes in metabolites involved in the metabolism of glucose, lipids, and amino acids, after subjecting a group of rats to high ozone concentrations for a few hours [21]. Subsequently, the same researchers showed, in a controlled human study, that after short-term exposure to ozone, circulating lipid metabolites were altered as a result of changes in metabolism, giving rise to the saturation of certain metabolic pathways [23], suggesting alterations in membrane phospholipids linked to proinflammatory mechanisms due to ozone. We believe that if these effects are sustained for longer periods of time, they can trigger permanent damage to the metabolic system or even the immune system. One study indicated that long-term exposure to ambient air pollutants may increase the risk of metabolic syndrome, especially among males, results consistent with our findings; however, those results come from a cross-sectional study conducted in the adult population [29]. Another study reported a positive association between ozone exposure and type 1 diabetes, as well as alterations in plasma lipid profiles and lower levels of glucagon-like peptide 1 after exposure to highly polluted air, indicating that sub-chronic ozone-induced beta-cell dysfunction may secondarily contribute to other tissue-specific metabolic alterations, due to impaired regulation of glucose, lipid, and protein metabolism in young adult rats [30]. Similarly, an increase in oxidative stress has been observed in rats exposed to high concentrations of ozone, leading to mitochondrial DNA damage as well as a decrease in endothelial vascular nitric oxide synthase, producing a significant increase in atherogenesis in comparison with rats exposed to filtered air, results that provide further experimental evidence for the possible link between air pollution and MS [31].
There are different hypotheses describing the possible biological mechanisms that support our findings. Although these mechanisms are still not completely clear, it has been described that air pollutants may perturb autonomic nervous system balance by activating afferent pulmonary autonomic reflexes; additionally, when ozone enters the body through the respiratory tract, it reacts with the biomolecules in the fluid that covers the lungs, generating highly reactive products that enter the bloodstream and promote inflammation cascades that can damage the cardiac vasculature, which in turn can induce arrhythmias, reduced myocyte contractility, and decreased coronary blood flow due to acute vasoconstriction, which can increase blood pressure [15,22,32]. Similarly, it has been described that exposure to air pollutants may induce the generation and release of endogenous pro-inflammatory mediators and vasculo-active molecules, which can disrupt insulin signaling and impair vasorelaxation [30,33]. Oxidative stress promotes the activation of Nrf2, heat shock protein 70, and NF-κB and increases the expression of a variety of proinflammatory cytokines (TNF-α and interleukin 1β), chemokines (interleukin 8), and adhesion genes; finally, some studies report that air pollution exposure is associated with abnormal methylation levels of global DNA and of specific genes involved in glucose homeostasis and lipid metabolism pathways [34,35]. One study from 2015 proposed that short-term exposure to ozone can increase circulating cortisol, reflecting activation of a neurohormonally mediated stress response, likely through the HPA axis, together with altered lipid metabolic processes stimulating adipose lipolysis of triglyceride stores, whose products are liberated into the circulation. The increased lysolipids, likely released from the hydrolysis of cellular and membrane phospholipids, and serum polyunsaturated fatty acids in ozone-exposed humans may be linked to proinflammatory mechanisms due to ozone exposure, along with elevated circulating metabolites of β-oxidation and ω-oxidation; overall, that study demonstrates that ozone exposure in humans is associated with an increased release of stress hormones causing lipolysis, as in rodents [21].
Some limitations must be considered when interpreting our results. First, daily variations in ozone exposure were evaluated through the daily records of the fixed central monitoring stations (RAMA). The temporal variations in each adolescent’s exposure were assumed to follow those at the central monitoring site, and we did not obtain detailed information about each participant’s time-activity patterns; instead, we presumed that exposure was primarily associated with the amount of time spent outdoors. To strengthen the validity of this assumption, each adolescent was assigned to the monitoring site closest to his or her home or school by means of a GIS spatial analysis, providing greater variability in the data. Second, the results could in principle reflect poor control of confounders such as socioeconomic status; this is unlikely, however, because all our participants came from the same study area and attended the same public school system, the design of the study excluded female participants at risk of pregnancy and/or those with pre-existing illness, and the models were adjusted for potential confounders.
On the other hand, among the main strengths of this study, we can mention that it is a longitudinal study with a good participation rate (89% of participants from the cohort entered this evaluation). Additionally, having valuable information on other important variables at different times during follow-up gives greater support to the findings. Because this is a cohort study, we were able to observe that the exposure preceded the outcome, and our results were based on an observational analysis of the cohort; we also used mixed linear multivariate models to account for the strong patterns of association between the outcome and exposure variables and to control for confounders.
In this study, we adjusted for variation in physical activity, an important predictive factor for metabolic syndrome. Previous studies [23] have found that certain metabolic changes due to ozone exposure can vary after exercise. Even so, we did not find significant differences in a stratified analysis comparing adolescents who performed vigorous activity with those who did not. A future study should expand the sample size and measure these activities more precisely, in order to corroborate the results presented here.
## 5. Conclusions
Our data show that short-term ambient ozone exposure is associated with the components of MS. These adverse effects were observed in a longitudinal setting in a free-living population, specifically a cohort of obese adolescents. These results could have significant public health policy implications, given that metabolic syndrome is defined by a combination of cardiovascular risk factors (hypertension, elevated triglycerides and lowered high-density lipoprotein cholesterol, raised fasting glucose, and obesity), is associated with systemic inflammation, and increases the risk of cardiovascular disease in the early stages of life.
# Prediction of Relevant Training Control Parameters at Individual Anaerobic Threshold without Blood Lactate Measurement
## Abstract
Background: Active exercise therapy plays an essential role in tackling the global burden of obesity. Optimizing recommendations in individual training therapy requires that the essential parameters heart rate (HR(IAT)) and workload (W/kg(IAT)) at the individual anaerobic threshold (IAT) are known. Performance diagnostics with blood lactate is one of the most established methods for this kind of diagnostics, yet it is also time consuming and expensive. Methods: To establish a regression model which allows HR(IAT) and W/kg(IAT) to be predicted without measuring blood lactate, a total of 1234 performance protocols with blood lactate in cycle ergometry were analyzed. Multiple linear regression analyses were performed to predict the essential parameters (HR(IAT), W/kg(IAT)) using routine ergometry parameters without blood lactate. Results: HR(IAT) can be predicted with a root mean square error (RMSE) of 8.77 bpm ($p \leq 0.001$), R² = 0.799 (adjusted R² = 0.798), without performing blood lactate diagnostics during cycle ergometry. In addition, it is possible to predict W/kg(IAT) with an RMSE of 0.241 W/kg ($p \leq 0.001$), R² = 0.897 (adjusted R² = 0.897). Conclusions: It is possible to predict essential parameters for training management without measuring blood lactate. This model can easily be used in preventive medicine and results in an inexpensive yet better training management of the general population, which is essential for public health.
## 1. Introduction
Physical inactivity and obesity are associated with an increase in cardiovascular diseases, in particular coronary heart disease, diabetes mellitus and a higher level of inflammation [1,2,3]. Research has demonstrated that engaging in regular physical activity reduces both morbidity and mortality [4,5]. As our society continues to face an increasing burden of disease from conditions such as diabetes mellitus, arterial hypertension, and obesity, the cost of treating these cardiovascular diseases will become a growing financial strain on the healthcare system in the future [6,7,8,9,10,11]. During the COVID-19 pandemic in particular, regular exercise decreased significantly [12]. The consequences may include not only increasing obesity and cardiovascular disease but also mental health conditions [13,14]. The WHO guidelines recommend regular activity (150–300 min per week of moderate-intensity, or 75–150 min per week of vigorous-intensity physical activity) [15]. In order to achieve comprehensive prevention, simple and inexpensive training recommendations and the prescription of physical activity are required [16]. Optimizing training intensity recommendations in cardiopulmonary training requires that the essential parameters heart rate (HR) and training load (W/kg) at the individual anaerobic threshold (IAT) are known [17]. It is necessary to define individual training parameters, as several studies have confirmed that training adherence depends on training intensity [18,19]. Adherence to physical activity is one of the most relevant factors for better health [20]. The implementation of exercise recommendations by health workers is reported to be insufficient [21,22], which might be caused by the lack of personalized training programs. Overexertion, defined as the transition from aerobic to anaerobic metabolism [23], may therefore reduce training adherence. To perform optimal training, knowing the heart rate at the individual anaerobic threshold is essential for better training control. Measuring these important parameters (HR(IAT) and W/kg(IAT)) is largely limited to competitive athletes in dedicated sports medicine centers. Performance diagnostics with blood lactate is one of the most established methods for this kind of diagnostics [24,25], but it is time- and cost-intensive [26].
An early approach to predicting the IAT was made by Conconi et al., who determined a heart rate threshold that correlates highly with the IAT in runners [27]. There are only a few studies using linear regression models to predict the anaerobic threshold in cycle ergometry in the general population; most examined athletes, and the numbers of subjects were low [27,28,29,30]. Simple methods for measuring performance are crucial for giving the general population access to appropriate training parameters, particularly individuals who are new to sports. There is a lack of studies examining prediction models for essential training parameters in the general population using non-invasive methods [31,32,33,34]. The aim of this study was to estimate HR(IAT) and W/kg(IAT) with linear regression models in order to establish easily accessible training recommendations for the general population, as physical inactivity is an important predictor of mortality [35].
## 2. Methods
In this study, a retrospective analysis was performed. Secondary data of the Sports Medicine Institute of the University Medical Center Charité Berlin were analyzed for the prediction of HR(IAT) and W/kg(IAT) without lactate measurement. All ergometry protocols conducted between 2015 and 2017 were obtained from the institutional sports medicine information system. Protocols were excluded for the following reasons: (I) missing lactate data, (II) missing heart rate data, and (III) insufficient protocols or implausible data. Further exclusion criteria were cardio-pulmonary and musculoskeletal diseases. The study was conducted in accordance with the Declaration of Helsinki and with the approval of the local ethics committee of Humboldt University Berlin.
## 2.1. Peak Performance Test
The performance test on the cycle ergometer started at 50 watts (W), and the load was raised in 25 W steps every 3 min. Resting heart rate, blood pressure and blood lactate were measured before the lactate step test was initiated. During the test, heart rate was continuously measured by electrocardiogram. Blood pressure, blood lactate and RPE (rating of perceived exertion) were measured in the last thirty seconds of each step. The lactate threshold (LT, the first significant increase in blood lactate during the exercise test starting from the resting lactate values) and the individual anaerobic threshold (IAT, the second significant increase in blood lactate, marking the transition from aerobic to anaerobic metabolism) were determined using the method of Dickhuth et al. [36].
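For illustration, the sketch below shows one simplified way to locate both thresholds in a step-test lactate curve. It is not the validated Dickhuth algorithm: the 0.3 mmol/L rise used to flag the first significant increase and the interpolated LT + 1.5 mmol/L criterion for the IAT are assumptions chosen for this example only.

```python
import numpy as np

def thresholds(power_w, lactate_mmol, delta_lt=0.3, delta_iat=1.5):
    """Estimate LT and IAT (in watts) from a lactate step test.

    power_w / lactate_mmol: end-of-step values, resting sample first (0 W).
    LT is taken as the first step whose lactate exceeds the curve minimum
    by delta_lt; the IAT is interpolated where lactate reaches LT lactate
    + delta_iat (an illustrative stand-in for the Dickhuth method).
    """
    p = np.asarray(power_w, dtype=float)
    la = np.asarray(lactate_mmol, dtype=float)
    base = la.min()
    rising = np.where(la > base + delta_lt)[0]
    if rising.size == 0:
        raise ValueError("no lactate increase detected")
    i_lt = rising[0]
    target = la[i_lt] + delta_iat
    p_iat = float(np.interp(target, la[i_lt:], p[i_lt:]))  # rising limb only
    return float(p[i_lt]), p_iat

# example step test: 50 W start, +25 W per 3-min step
power   = [0, 50, 75, 100, 125, 150, 175, 200]
lactate = [1.1, 1.0, 1.1, 1.4, 2.0, 2.9, 4.2, 6.0]
print(thresholds(power, lactate))  # (100.0, 150.0)
```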
## 2.2. Statistical Analysis
The Kolmogorov–Smirnov test was used to determine whether the continuous variables were normally distributed, and a descriptive analysis was carried out. The power per kilogram body weight at the individual anaerobic threshold (W/kg(IAT)) was used as a measure of individual physical performance. The Pearson correlation coefficient and the root mean square error (RMSE) were used to assess the agreement between predicted and measured values. A two-sided significance level of α = 0.001 was set as the threshold for determining statistical significance. Before performing the multiple regression analysis for HR(IAT), all parameters were checked individually for their respective correlation and linear regression at a high level of significance ($p \leq 0.001$). Descriptive analysis was performed and is shown in Table 1. Minimum, mean and maximum HR, as well as HR at one, three and five minutes post-workout, were examined. All parameters that did not reach the significance level of $p \leq 0.001$ were removed. All statistical analyses were performed using the SPSS software (IBM Corp. Released 2016. IBM SPSS Statistics for Windows, Version 25.0. Armonk, NY, USA: IBM Corp.) and Matlab (MATLAB and Statistics Toolbox Release 2022b, The MathWorks, Inc., Natick, MA, USA).
## Study Population
The population consisted of 188 competitive athletes (football, handball, athletics, volleyball, etc.), 226 prevention and rehabilitation athletes (with various chronic diseases, e.g., orthopedic, rheumatological or other autoimmune diseases) and 820 recreational athletes. None of the athletes had known coronary artery disease or heart failure. A total of 579 had a BMI greater than 25. Overall, 141 individuals of the 226 prevention and rehabilitation athletes had a BMI greater than 25, 52 individuals in this subgroup had a BMI greater than 30, and 13 individuals had a BMI greater than 35. Descriptive analysis is shown in Table 1.
## 3. Results
We performed multiple linear regression analyses for both HR(IAT) and W/kg(IAT) using personal parameters such as gender, age, height and weight as well as performance measurements such as heart rate and power as input parameters.
After each multiple regression analysis, we removed the parameter with the highest p-value until the desired significance level of $p \leq 0.001$ was met by all remaining input parameters.
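The following sketch illustrates this backward-elimination loop. It is a minimal reconstruction; the column names (gender, weight, Pmean, Pmax, HRmean, HRmin) and the file name in the usage comment are placeholders, not the authors' actual data layout.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(X: pd.DataFrame, y: pd.Series, alpha: float = 0.001):
    """Refit OLS repeatedly, dropping the predictor with the highest p-value,
    until every remaining predictor satisfies p <= alpha."""
    X = X.copy()
    while True:
        model = sm.OLS(y, sm.add_constant(X)).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha or X.shape[1] == 1:
            return model
        X = X.drop(columns=[worst])

# hypothetical usage:
# df = pd.read_csv("ergometry_protocols.csv")
# model = backward_eliminate(
#     df[["gender", "weight", "Pmean", "Pmax", "HRmean", "HRmin"]], df["HR_IAT"])
# rmse = np.sqrt(np.mean(model.resid ** 2))
```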
After completing this process, the following input parameters were included in the multiple linear regression analysis for determining HR(IAT) (see the equation in Figure 1): gender; weight; mean power (Pmean); maximum power (Pmax); mean HR (HRmean); and minimum HR (HRmin). Using these parameters in multiple linear regression, HR(IAT) can be determined with an RMSE of 8.77 bpm. The adjusted R-squared is 0.798.
The proposed linear regression model for determining HR at the IAT was compared to the *Karvonen formula* (Figure 2). The proposed method shows a lower RMSE (8.77 bpm) than the *Karvonen formula* (RMSE of 11.2 bpm), so HR determination at the IAT is more accurate using linear regression.
The essential parameter W/kg(IAT), which is especially important for detecting changes in performance, was also examined. As explained above, input parameters were iteratively removed if they did not meet the significance level of $p \leq 0.001$. This included the removal of the heart rate recovery parameter (HRR = HRmax - HR after 5 min of recovery), as its significance level was $p = 0.057$. As a result, only the following five parameters were included in the multiple linear regression analysis to determine W/kg(IAT): gender; body weight (kg); mean power (Pmean); maximum power (Pmax); and maximum HR (HRmax). Using these parameters in multiple linear regression (Figure 3), W/kg(IAT) can be determined with a root mean square error, RMSE = 0.241 W/kg. The adjusted R-squared was 0.897. Figure 3 shows the comparison between the W/kg(IAT) values determined by means of blood lactate values (horizontal axis) and those determined by means of multiple linear regression (vertical axis).
To better understand the impact of individual input parameters on W/kg(IAT), we visualized the regression parameters using an effect plot in Figure 4. For this, we multiplied the weights of the formula in Figure 3 with the actual values in our database. The latter are normalized by subtracting their respective mean values, as this offset is already captured by the model intercept, in our case 2.2306 W/kg.
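A minimal sketch of this computation follows. The coefficient values are placeholders, not the weights from Figure 3; it only illustrates how each predictor's contribution relative to an average subject is obtained.

```python
import numpy as np

# placeholder weights; the actual coefficients are given in Figure 3
beta = {"gender": 0.10, "weight": -0.012, "Pmean": 0.004,
        "Pmax": 0.005, "HRmax": -0.002}

def effect_contributions(beta, data):
    """Per-predictor effects: regression weight times the mean-centered
    predictor values. The intercept (2.2306 W/kg in the text) is then the
    prediction for a subject with average values on every predictor."""
    return {name: b * (np.asarray(data[name]) - np.mean(data[name]))
            for name, b in beta.items()}
```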
## 4. Discussion
In this retrospective analysis, the dataset was examined to predict the heart rate at the IAT as well as the training load (W/kg) at the IAT without measuring blood lactate values in cycle ergometry. Both the heart rate and the workload at the individual anaerobic threshold are essential parameters for training control. These parameters are currently best determined via blood lactate diagnostics during ergometry performance testing. A total of 1234 performance protocols with blood lactate in cycle ergometry were analyzed. Multiple linear regression analyses were performed to predict the essential parameters heart rate at individual anaerobic threshold (HR(IAT)) and workload at individual anaerobic threshold (W/kg(IAT)) by using routine ergometry parameters without blood lactate. HR(IAT) can be predicted with a root mean square error (RMSE) of 8.77 bpm ($p \leq 0.001$). The intention of this regression model is to accelerate preventive medicine by using every ergometry to compile an individual training recommendation in primary and secondary prevention. Continuous training adherence is at once the greatest challenge and the greatest benefit. To avoid overexertion, knowing the individual anaerobic threshold is necessary. This applies to preventive medicine as well as to pre-habilitation, in order to meet the WHO's proposed exercise recommendation of 150–300 min per week. Future work includes supervising pre-habilitation patients with the recommended regression model, as standard cycle ergometry can be performed by every general practitioner, and patients can be examined close to their place of residence.
There is a need for more research in preventive medicine that focuses on developing better preventive training control methods for the general population. The main leverage point is the prevention of cardiovascular disease, one of the leading causes of morbidity and mortality. Furthermore, as societies age, frailty among the elderly population is a growing financial burden [37]. Increasing frailty goes hand in hand with a decrease in quality of life. Regular physical activity can reduce frailty [38], and research has also shown an improvement in quality of life [39]. This challenge for health systems needs to be addressed by establishing easily accessible methods for training control parameters and training programs for the general population.
Although lactate performance diagnostics is a well-established method for assessing performance, methodological errors must be considered. Because blood lactate rises continuously while it is only measured intermittently in capillary blood from the earlobe, some measurement inaccuracy must be assumed. Furthermore, certain nutritional regimens (e.g., low-carb) are associated with an altered lactate curve [40]. Due to glycogen depletion, spuriously low blood lactate values are measured, which can lead to misinterpretation of the lactate curve. A large number of studies have examined different threshold models, whereby the aerobic and anaerobic thresholds are better regarded as an aerobic–anaerobic transition rather than exact points [41,42]. The earlier assumption of fixed aerobic and anaerobic thresholds soon revealed individual differences in further investigations and the need to consider individual threshold methods. Despite these challenges of metabolic threshold models, lactate determination for training recommendations, in contrast to other methods, has become established as a daily routine over the last decades. The cardio-pulmonary exercise test (CPET), which is applied to determine VO2max and ventilatory thresholds, is also a method of assessing an individual's physical fitness. Collecting respiratory and metabolic parameters is a complex measurement, and expensive equipment with regular calibration is needed. This method is significantly more time-consuming, requires specially trained nurses or sports scientists, and is therefore primarily reserved for patients with cardiac and pulmonary diseases. The RPE scale and the walking test are simple methods to avoid overexertion in preventive and recreational sports. Nevertheless, the application of RPE is difficult for people who are inexperienced in sports and can easily lead to overexertion or insufficiently challenging activity. In preventive medical examinations, an individual training recommendation is increasingly demanded by patients.
It has been shown that lactate accumulation displays inter-individual differences [43,44,45], and fixed submaximal threshold concepts (of 2 mmol/L and 4 mmol/L) should not be applied for individual training recommendations. The lactate concentration in the aerobic–anaerobic transition range also depends on muscle recruitment in different movement patterns [46,47]. Individual training recommendations should therefore be specific to the sport. Several studies have demonstrated the training effect of exercise based on the individual anaerobic threshold. The determination of HR(IAT) during ergometry without lactate diagnostics can be used for recommendations on basic endurance and interval training and for prevention programs.
Lactate testing is an expensive and time-consuming examination, as lactate measurement equipment and specially trained nurses are required, and it has until now rarely been covered by health insurance companies. Depending on the individual, the test takes forty to fifty minutes, including warm-up, measuring resting heart rate and the recovery period at the end. In various studies, the lactate transition range and the maximum lactate steady state showed a connection with hormonal and immunological changes, which at least supports the assumption of an upper anaerobic threshold [48,49,50]. Therefore, the individual anaerobic threshold should be considered when making training recommendations for the general population, since long-term training with a disproportionate increase in lactate can lead to training non-adherence and vulnerability to infections or injuries, thus forfeiting the known advantages of regular physical activity [49,50]. Our regression model allows a good prediction of HR(IAT) with an RMSE of 8.77 bpm and a prediction of W/kg(IAT) with a deviation of 0.241 W/kg. Shen et al. examined the velocity at the lactate threshold on a treadmill by using several prediction models with different heart rates [31]. As with the data in this study, age was not a significant parameter and was excluded from the regression models. However, body mass index was also excluded there [31], whereas this study only included body weight for predicting W/kg(IAT). Interestingly, women seem to show a slightly higher W/kg unless body height, which enters the model negatively, is considered, in which case these effects neutralize each other. Differences from the results of Shen et al. might also be attributed to the different physical demands of treadmill running and cycle ergometry [31]. Sport-specific differences in HR(IAT) and W/kg(IAT) need to be considered [51,52,53], and further research on regression models for running and rowing should be addressed in future studies.
The exclusion of heart rate recovery from the prediction of W/kg(IAT) was justified by its not meeting the significance level of $p \leq 0.001$. This suggests that HRR may not be suitable as a stand-alone predictor of physical fitness [54]. As research results are inconclusive and the evidence is weak [55,56,57], further research should be performed in larger studies. However, HRR should be recorded as a longitudinal parameter [54], since changes in HRR have shown good results in recognizing cardiopulmonary diseases [58,59]. In this context, HRR is an essential parameter that should be monitored regularly to detect changes in autonomic function [60].
The *Karvonen formula* is mainly used for training control in popular sports. The *Karvonen formula* uses the heart rate reserve, and it requires that the maximum heart rate and the resting heart rate are determined before the formula can be applied [61]. By multiplying the heart rate reserve with a fitness level factor (0.8 for athletes; 0.6 for recreational athletes; and 0.3 for untrained people), the heart rate at the anaerobic threshold can be calculated [61]. The *Karvonen formula* was applied to the measurement protocols examined in this study. The results of using the *Karvonen formula* with a factor of 0.7, chosen because of the predominantly athletic clientele of the sports medicine university outpatient clinic, are shown in Figure 2. In comparison to the measured heart rate at the IAT, the scatter diagram reveals a good correlation but a higher RMSE of 11.2 bpm compared with the regression model of this study (RMSE 8.77 bpm). The shape of the curve indicates an overestimation of the low and an underestimation of the high HR values at the IAT. The *Karvonen formula* also uses the resting HR and the maximum heart rate to determine HR(IAT). Both heart rates are individual values, and the maximum heart rate is especially difficult to determine for the general population, particularly as it changes with age [62,63,64]. Thus, an initial determination of maximum heart rate is also required for the *Karvonen formula* and should be acquired under medical supervision, especially for individuals > 35 years, to avoid cardiovascular adverse events. Due to the improved prediction of the regression model determined in this study, we recommend cycle ergometry under medical supervision with the regression model identified in Figure 1.
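As a reference, the sketch below implements the Karvonen calculation exactly as described above; the example heart rates are invented for illustration.

```python
def karvonen_hr(hr_max: float, hr_rest: float, factor: float) -> float:
    """Karvonen target heart rate: resting HR plus a fitness-level fraction
    of the heart rate reserve (HRmax - HRrest). Factors per the text:
    0.8 athletes, 0.6 recreational athletes, 0.3 untrained; 0.7 was used
    for the predominantly athletic cohort in this study."""
    return hr_rest + factor * (hr_max - hr_rest)

print(karvonen_hr(hr_max=190, hr_rest=60, factor=0.7))  # 151.0
```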
In preventive medicine, ergometry is also recommended for every sports beginner and returner over the age of 40 (for men) or 55 (for women), according to the German guideline for preventive medical check-ups in sports. In contrast to lactate performance diagnostics, ergometry can be carried out by almost any general practitioner or as part of an occupational medical examination. However, an individual training recommendation is usually only given by sports physicians, since a corresponding specialization in individual training advice is otherwise missing. Due to the increasing number of cardiovascular events, obesity and an increasingly aging population, there is a gap in care when it comes to reaching the general population with individual training recommendations and realizing the full scope of preventive medicine. The proposed regression model differs from other studies in its significantly higher number of study protocols examined in a heterogeneous population [27,28]. In addition, further research should examine whether shorter exercise tests, such as the 6-MWT (6-min walking test), can be used in a regression model to predict essential parameters for training control [65]. Studies in obese individuals demonstrated promising potential for assessing the individual respiratory threshold [65,66,67]. Obese and older subjects, or individuals with other disabilities that rule out cycle ergometry, might especially benefit [65]. Thus, a regression model for cycle ergometry with a shortened protocol should be addressed in future studies, as these shorter tests can be performed more regularly to examine training improvement and to address the changed HR(IAT) after consistent training [66]. Considering the results of this retrospective study, we recommend providing a training program with an individual training heart rate at the IAT and a watt range at the IAT after every check-up examination using cycle ergometry under medical supervision, in line with the recommendation of the WHO [15].
## Future Work
As childhood and adolescent obesity has also been rising continuously during the last few years [68], there is a need to find approaches to physical activity in these age groups, since chronic diseases will start at early ages and will have a huge impact on the gross national product. Further studies are necessary to establish regression models for HR(IAT) in adolescents in order to teach them a healthy and adequate regular exercise program with potentially better exercise adherence, since exercise adherence is one of the most encouraging parameters for health [20]. These exercise programs should be established in schools under supervision and with regular physical examinations. Furthermore, research has shown that pre-habilitation can be of relevant benefit prior to chemotherapy or extensive operations [69,70]. As neoadjuvant chemotherapy is associated with decreasing aerobic endurance [70,71], there is a need for easily accessible training therapy not only in primary but also in secondary and tertiary prevention. Measuring HR(IAT) at routine secondary and tertiary preventive examinations may improve exercise adherence; further research in these subgroups is necessary.
Further goals of these examinations are the establishment of pre-habilitation services close to home, besides the expansion of preventive individual training recommendations for the general population. An individual training recommendation for pre-habilitation could therefore be made directly by the attending general practitioner or cardiologist. A gap in care, currently served by only a few sports medicine facilities, could thus be closed. Further examinations with other ergometer types (rowing ergometer, elliptical) are planned in order to enable a conversion of HR(IAT) between different ergometer types in prevention and pre-habilitation.
## 5. Limitations
Incorrect entries during the manual transcription of the lactate values must be considered. These were minimized in advance by means of a plausibility check of the entire dataset. The sample size in this study is appropriate for generating a valuable prediction in comparison to other studies [31,72]. The age and gender distribution may differ from the general population, as the examined population includes more physically active individuals, especially in the younger age groups. Furthermore, a heart rate deviation of 8.77 bpm is not appropriate for athletes in professional sports, for whom a blood lactate test or cardiopulmonary exercise test (CPET) is still recommended. This regression model is suitable for cardiorespiratory endurance sports. It should be noted that it is not applicable to resistance or interval training; individual training recommendations for these kinds of training should be considered separately.
At the same time, ergometry offers a simple and inexpensive measuring method that can be performed in the outpatient and inpatient sector and represents a suitable procedure for popular sports and preventive medicine to monitor cardiorespiratory training.
## 6. Conclusions
In conclusion, it is possible to derive relevant parameters for training control after a standard cycle ergometry without performing a blood lactate test, by using regression models to predict HR(IAT) and W/kg(IAT) for the general population. This enables training control without blood lactate diagnostics or CPET and achieves considerable time and financial savings for active exercise therapy as well as for preventive and rehabilitative medicine. Regular individual test repetition allows short-term training adaptation to be taken into account and supports continuous training progress.
# Eating Behavior and Obesity in a Sample of Spanish Schoolchildren
## Abstract
From the point of view of prevention, it is convenient to explore the association between eating behavior and the obese phenotype during school and adolescent age. The aim of the present study was to identify eating behavior patterns associated with nutritional status in Spanish schoolchildren. A cross-sectional study of 283 boys and girls (aged 6 to 16 years) was carried out. The sample was evaluated anthropometrically by Body Mass Index (BMI), waist-to-height ratio (WHtR) and body fat percentage (%BF). Eating behavior was analyzed using the CEBQ “Children’s Eating Behavior Questionnaire”. The subscales of the CEBQ were significantly associated with BMI, WHtR and %BF. Pro-intake subscales (enjoyment of food, food responsiveness, emotional overeating, desire for drinks) were positively related to excess weight by BMI (β = 0.812 to 0.869; p = 0.002 to <0.001), abdominal obesity (β = 0.543 to 0.640; p = 0.02 to <0.009) and high adiposity (β = 0.508 to 0.595; p = 0.037 to 0.01). Anti-intake subscales (satiety responsiveness, slowness in eating, food fussiness) were negatively related to BMI (β = −0.661 to −0.719; p = 0.009 to 0.006) and %BF (β = −0.17 to −0.46; p = 0.042 to 0.016).
## 1. Introduction
Eating habits acquired during childhood and adolescence tend to become established in adulthood. For this reason, achieving a healthy diet at an early age is a decisive factor in avoiding obesity and chronic diseases. Several experimental studies and reviews on the subject have shown that parents have a strong influence on their children’s eating behavior [1,2]. This is particularly important in childhood, during which children learn what, when and how to eat according to the cultural transmission of family patterns and attitudes [3,4]. Parental prohibition or restriction of food, or the use of food as a reward, are factors that impact the emotional domain and predict children’s enjoyment of food or their response to satiety [5]. Similarly, healthy nutrition education by families is associated with positive attitudes towards food and appropriate regulation of food intake, which is reflected in children’s improved nutritional status [6]. Obviously, parents also pass on their genes, which play a proven role in the regulation of appetite and food preferences [7,8,9]. In any case, eating behavior, which undoubtedly has both genetic and environmental components, is reflected in the nutritional condition of the subject and modulates the risk of obesity.
Different studies conclude that satiety responsiveness is lower in overweight children and adolescents, especially in those who are obese, and that their response to food cues is more pronounced. This has been interpreted as a higher desire to eat and a greater likelihood of eating in the presence of food. For this reason, overweight children and adolescents seem to be more likely to eat in the absence of hunger, out of mere desire or pleasure [10]. In addition, food enjoyment and speed of intake appear to be higher in obese children, who have a delayed sense of satiety [11]. This bidirectional association therefore means that children with a greater enjoyment of or taste for food are at greater risk of obesity [12]. It is worth noting that a greater increase in intake under emotional stress has also been observed in overweight children and adolescents compared to normal- and underweight subjects [13,14]. However, the results in this respect are controversial, as recent meta-analyses show that the relationship between emotional intake and body composition is not as direct in children and adolescents as it is in adults [15]. Consequently, it is necessary to explore the association between eating behavior and the obese phenotype during the school and adolescent age range.
Previous findings show the usefulness of analyzing the eating behavior of children in detail using questionnaires such as the Children’s Eating Behavior Questionnaire (CEBQ) [16]. This test identifies different phenotypes related to habits such as food avoidance, early or late satiety, gluttony, or tendency for emotional overeating, habits that may eventually alter nutritional status [17,18]. Research using the CEBQ relates overweight and obesity in children and adolescents with higher scores on the pro-intake scales and lower scores on the anti-intake scales, pointing to higher consumption and enjoyment of food, lower satiety and more emotional overeating behaviors. Conversely, low weight is associated with lower scores on the pro-intake scales and higher scores on the anti-intake scales, relating to avoidance eating behaviors, early satiety and lower enjoyment of food [19].
Initially used in British children [16], the CEBQ has been applied to schoolchildren from different populations, such as the United States [20], Sweden [21], Saudi Arabia [22], Bosnia [23], Portugal [24] and Chile [25]. In Spain, the only precedent is the study of Jimeno Martinez et al. [26] as part of the MELI-POP (Mediterranean Lifestyle in Pediatric Obesity Prevention) pilot study. On the other hand, in most of the studies mentioned, the association between eating behavior assessed by the CEBQ and obesity has been established through weight and BMI, with very few studies including other indicators of adiposity [27]. For this reason, the main objective of the present study is to identify, in a sample of Spanish schoolchildren, the eating behavior associated with nutritional status assessed by anthropometric parameters that capture body composition and fat distribution in greater detail.
## 2.1. Participants
This is a cross-sectional study of a convenience sample of 283 Spanish schoolchildren aged 6 to 16 years (33.21% [94] girls; 66.69% [189] boys). A total of 54.6% were aged between 6 and 10 years (107 boys and 48 girls); the remaining 45.40% (84 boys and 44 girls) were between 11 and 16 years of age. The sample was recruited between 2019 and 2021 in public schools and municipal sports centers in middle-class neighborhoods in the Community of Madrid, Spain. In these sports centers, schoolchildren take part in soccer, basketball, gymnastics or swimming as after-school activities.
In 42.20% of families, both parents had primary education. In 25.30%, at least one parent had secondary or university education, and in 32.50% of the cases, both parents had advanced specific vocational training or university education. All the schoolchildren performed between 100 and 120 min of physical activity per week during school hours in two sessions. A total of 93.20% also participated in out-of-school physical activity (mean = 3.61, SD = 1.84 h/week), with no differences between sexes (Table A1).
Data collection was carried out as part of a school health program developed by the Spanish Society of Dietetics and Food Sciences in coordination with local councils. It should be noted that data collection was partially affected by the COVID-19 pandemic, which forced special precautions and decreased the number of children finally included in the present study. The data were anonymized and separated from any information that could identify the subject. Participants’ assent and informed consent from parents or guardians were required, following the bioethical principles of the Declaration of Helsinki in its most updated version [28]. The Ethics Committee of the Autonomous University of Madrid approved the project (CEI-91-1699).
## 2.2. Instruments
Each participant was assessed anthropometrically through direct measurements, body composition indicators and adiposity distribution. Their parents or guardians completed the CEBQ [16] questionnaire.
## 2.2.1. Anthropometric Study
The anthropometric assessment was carried out according to the protocol of the International Biological Program (IBP) [29]. Height (cm) was measured with a Tanita Leicester measuring rod with an accuracy of 1 mm; weight (kg) was recorded; umbilical waist circumference (cm) was measured with a Cescorf tape; and bicipital, tricipital, subscapular and suprailiac skinfolds (mm) were measured with a Holtain adipometer with an accuracy of 0.2 mm and constant pressure (10 g/mm²).
For prevalence analysis, the sample was stratified by sex. Nutritional categories were established based on the Body Mass Index [BMI = weight (kg)/height² (m²)] using the cut-off points of Cole et al. [30,31] and the waist-to-height ratio (WHtR = waist circumference/height), using the criteria established by Marrodán et al. [32], which define abdominal obesity as >0.51 in boys and >0.50 in girls, and abdominal overweight as >0.48 in boys and >0.47 in girls. Body fat percentage (%BF) was estimated from the skinfolds using the Siri equation [33], with a previous calculation of body density [34,35]. Adiposity levels were classified according to the references for the Spanish youth population [36].
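The sketch below shows how these indices can be computed. The Siri step is the equation as cited; the body-density equation, however, is only an illustrative Durnin–Rahaman-style log-linear form, and its coefficients are placeholders that must be replaced by the age- and sex-specific values from refs. [34,35].

```python
import math

def bmi(weight_kg: float, height_cm: float) -> float:
    h = height_cm / 100.0
    return weight_kg / h ** 2          # kg/m^2

def whtr(waist_cm: float, height_cm: float) -> float:
    return waist_cm / height_cm        # abdominal obesity: >0.51 boys, >0.50 girls

def percent_body_fat(sum4_skinfolds_mm: float, sex: str) -> float:
    """Siri equation on body density: %BF = (4.95/D - 4.50) * 100.
    The log-linear density coefficients below are placeholders for the
    published age/sex-specific values."""
    c, m = (1.1533, 0.0643) if sex == "M" else (1.1369, 0.0598)
    density = c - m * math.log10(sum4_skinfolds_mm)
    return (4.95 / density - 4.50) * 100.0

print(round(bmi(55.0, 160.0), 1))              # 21.5
print(round(whtr(70.0, 160.0), 2))             # 0.44
print(round(percent_body_fat(40.0, "M"), 1))   # ~21.3
```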
## 2.2.2. CEBQ Questionnaire
As indicated above, the CEBQ [16] provides information on the response to satiety, taste for food, speed of intake, and emotional food consumption. It is a validated questionnaire with 35 items that assess eight dimensions of eating behavior; the questions are answered on a Likert-type scale scored from 0 to 4 according to the intensity of the behavior (where 0 = never, 1 = rarely, 2 = sometimes, 3 = often and 4 = always).
The items are classified into eight subscales: food responsiveness (FR; 5 items), enjoyment of food (EF; 4 items), emotional overeating (EOE; 4 items), desire for drinks (DD; 3 items), slowness in eating (SE; 4 items), satiety responsiveness (SR; 5 items), food fussiness (FF; 6 items) and emotional under-eating (EUE; 4 items). The first four subscales (FR, EF, EOE and DD) have a positive focus or pro-intake dimension, while the last four (SE, SR, FF and EUE) relate to anti-intake habits. Pro-intake behaviors integrate those habits that favor food consumption, while anti-intake behaviors encompass those habits that lead to avoidance of food consumption. The questions corresponding to each subscale are defined according to the CEBQ’s classification (Table A2). The Spanish version of the CEBQ has been validated [26] and used previously [37].
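Scoring is straightforward: each subscale score is the mean of its items. The sketch below assumes an item-to-subscale map consistent with the subscale sizes listed above; the specific item numbers shown are placeholders for the actual assignment in Table A2.

```python
import pandas as pd

# placeholder item numbers; the authoritative assignment is Table A2
SUBSCALES = {
    "FR": [12, 14, 19, 28, 34],    "EF": [1, 5, 20, 22],
    "EOE": [2, 13, 15, 27],        "DD": [6, 29, 31],
    "SE": [4, 8, 18, 35],          "SR": [3, 17, 21, 26, 30],
    "FF": [7, 10, 16, 24, 32, 33], "EUE": [9, 11, 23, 25],
}

def score_cebq(answers: pd.DataFrame) -> pd.DataFrame:
    """Mean 0-4 Likert score per subscale; expects columns item1..item35."""
    return pd.DataFrame({
        name: answers[[f"item{i}" for i in items]].mean(axis=1)
        for name, items in SUBSCALES.items()
    })
```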
## 2.3. Statistical Procedures
The internal consistency of the eight subscales of the CEBQ questionnaire and reliability estimates were determined using Cronbach’s alpha. Depending on the normality of the variables, ANOVA or Mann–Whitney U tests were performed to compare the mean scores of each CEBQ subscale across nutritional categories. Logistic regression models were applied with the CEBQ subscale scores as independent variables and, as dependent variables, the nutritional categories dichotomized according to excess weight, abdominal obesity or high %BF. In these models, sex, age and level of physical activity, previously coded according to WHO recommendations [38], were included as covariates. Statistical analysis was performed using R 4.1.2 software. Statistical significance was considered when $p \leq 0.05$.
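As a reference for the reliability analysis, a minimal implementation of Cronbach's alpha follows; the paper itself used R 4.1.2, so this Python version is only an illustrative equivalent, and the toy scores are invented.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - sum_item_var / total_var)

# toy example: 4 subjects answering a 3-item subscale
scores = np.array([[3, 4, 3], [1, 1, 2], [2, 3, 2], [4, 4, 4]])
print(round(cronbach_alpha(scores), 2))  # ~0.94
```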
## 3.1. Internal Consistency of the Subscales and Factor Structure of the CEBQ Questionnaire
First, the internal consistency of the CEBQ questionnaire in the present sample was assessed using Cronbach’s Alpha. Internal consistency was adequate (Cronbach’s alpha above 0.7) for all factors except subscales 1 and 8. The unweighted mean factor scores (±SD) and internal reliability estimates (Cronbach’s Alpha) for the CEBQ factors are presented in Table 1.
## 3.2. Sample Characterization
According to BMI, 6.70% of the participants were underweight and 35% had excess weight (24% overweight and 11% obese). Regarding the WHtR, 14.80% had abdominal overweight and 31.80% abdominal obesity. According to %BF, 51.20% were classified as having high adiposity (19.40% between the 90th and 97th percentiles and 31.80% above the 97th percentile). Significant differences were found between sexes in the categorization of the sample based on BMI, WHtR and %BF ($p \leq 0.001$), with boys having the highest percentage of excess weight in all three classifications (Table A3).
## 3.3. Comparison between Mean Scores of CEBQ Scales and Nutritional Status
Figure 1, Figure 2 and Figure 3 show a clear trend towards higher scores on the pro-intake subscales and lower scores on the anti-intake subscales as the BMI, abdominal obesity and relative adiposity categories increase. Figure 1 separately represents the trend of the mean scores on the pro-intake and anti-intake scales, classified according to each participant’s nutritional category based on the body mass index (BMI) categories [30,31]: the higher the level of excess weight, the higher the mean score on the pro-intake scales and the lower the score on the anti-intake scales. Figure 2 represents the trend of the mean scores on the pro-intake and anti-intake scales according to the nutritional category of the sample diagnosed from the waist-to-height ratio (WHtR) [32]: participants with abdominal overweight or obesity achieved higher mean scores on the pro-intake scales and lower scores on the anti-intake scales. Figure 3 represents the trend of the mean scores on the pro-intake and anti-intake scales as a function of the nutritional category established on the basis of body fat percentage (%BF) [36]: the higher the percentage of body fat, the higher the mean score on the pro-intake scales and the lower on the anti-intake scales.
Table 2 compares the mean scores of the different CEBQ subscales as a function of nutritional status as assessed by BMI, WHtR and %BF. In the pro-intake dimension, scores for the EF, FR and EOE subscales were higher ($p \leq 0.05$) in schoolchildren with excess weight according to BMI or above the cut-off points for WHtR and %BF. The score for the DD subscale was higher only in those with abdominal obesity. On the other hand, for the anti-intake dimension, these schoolchildren obtained lower scores ($p \leq 0.05$) on the SR and SE subscales than their non-obese peers.
As the regression model (Table 3) shows, in general terms, higher mean scores on the pro-intake scales translate into a higher risk of excess weight, abdominal obesity or high %BF. For example, each point scored on the FR and EOE subscales increases the odds of excess weight 2.385-fold and 2.253-fold, respectively. Likewise, each point obtained on the EF subscale increases the likelihood of having high adiposity 1.8-fold. In contrast, the higher the score on the anti-intake subscales (SR and SE), the lower ($p \leq 0.05$) the risk of being overweight or obese, and the lower the risk of having a high %BF.
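These risk multipliers are the odds ratios obtained by exponentiating the logistic regression coefficients; assuming the β range reported in the abstract corresponds to the EOE and FR coefficients of the excess-weight model, a one-line check reproduces them:

```python
import math
# exp(beta) converts logistic coefficients into odds ratios:
# exp(0.869) ~ 2.385 and exp(0.812) ~ 2.252, matching the reported
# 2.385 and 2.253 up to rounding of the coefficients
print(round(math.exp(0.869), 3), round(math.exp(0.812), 3))
```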
## 4. Discussion
Previous research yields results similar to those obtained in our study, showing a significantly lower satiety response capacity in children and adolescents with obesity, as well as a greater enjoyment of food, a high responsiveness to external stimuli associated with increased food intake, and a tendency to eat at a faster rate [24,39,40]. Two recently published major studies provide a comprehensive review of eating behaviors linked to childhood obesity, with an emphasis on appetite control and satiety regulation; they have shown that aspects captured by the CEBQ, such as satiety responsiveness, responsiveness to food and the tendency to overeat, are associated with BMI in children [41,42]. Several theories have been put forward to explain delayed satiety in overweight schoolchildren. These include the ability to ingest food without hunger, a larger gastric size, metabolic-hormonal dysregulation of appetite–satiety control, and a greater sensitivity to external factors that predispose to caloric, fatty or sweet products [43]. Similarly, emotional overeating, primarily associated with situations such as anxiety or boredom, or emotional eating due to food restrictions, is associated with an increased risk of developing obesity. On the other hand, several studies suggest that non-hunger eating may be a promising predictor of weight and obesity at an early age, although the evidence is limited; children who eat more in the absence of hunger are more likely to eat again a shorter time after a meal, especially more palatable, high-fat and high-calorie foods [44].
A study of 240 Portuguese schoolchildren aged 3–13 years also found a significant association between scores on all pro-intake subscales of the CEBQ and an increased risk of elevated BMI; in particular, the risk of obesity was associated with a weaker satiety response and greater food enjoyment [24]. Another study in Portugal involving 2951 schoolchildren concluded that high scores on the pro-intake and low scores on the anti-intake subscales at seven years of age were associated with increased cardiometabolic risk at ten years of age, and vice versa [40]. Similar research involving 406 London schoolchildren aged 7–12 years found significant associations between the subscales of emotional overeating, greater enjoyment of food and greater desire to drink and higher adiposity and weight [39]. However, as in the present study, no relationship was observed between the EUE score and nutritional status. It is worth noting that some review papers report a close relationship between EOE and emotional disturbances, especially those of a negative nature [42]. At the same time, other authors underline an evolutionary tendency to overeat, which generally promotes a higher intake of snacks and low-quality foods [45].
Our results are also consistent with previous findings on the association between lower scores on the anti-intake subscales of the CEBQ in overweight schoolchildren and higher scores in underweight schoolchildren. In particular, a study with a sample of 7295 schoolchildren from the Generation R Study cohort found that children rated by the CEBQ as fussier about food, enjoying food less, more food-avoidant, or more quickly satiated had significantly lower BMI and %BF [46]. Similarly, a study involving 2500 schoolchildren aged 3–10 years in Bosnia and Herzegovina found a linear increase in BMI as a function of scores on the pro-intake subscales, except for the desire to drink, and a decrease in BMI as a function of scores on the anti-intake subscales [23]. In general, underweight and normal-weight schoolchildren appear to exhibit certain behavioral traits that protect against the obesogenic environment, while overweight schoolchildren exhibit the opposite traits, considered risk factors, supporting the theory of “behavioral susceptibility to obesity” [47].
Several lines of research reflect the possibility that overweight children may have been more vulnerable to the obesogenic environment. This means they have been more receptive to advertising and other external stimuli that encourage a higher intake of caloric and unhealthy products. In addition, behavioral patterns predisposing to obesity that begin in childhood may become more pronounced in adolescence and even more so in adulthood [48]. Since interventions to modify eating behavior are more effective at earlier ages, it is of interest to prevent overweight and obesity and to understand the eating behavior of children and adolescents by using validated questionnaires for an individualized approach [49].
The present study has some limitations. As indicated in the material and methods section, fieldwork was conducted during the COVID-19 pandemic. Although the children attended school and the sports centers relatively normally, safety measures slowed the anthropometric measurements and limited the number of subjects finally included in the study, and it was impossible to obtain a sufficient sample size to separate by age group. It is also possible that the COVID-19 pandemic had some effect on the schoolchildren’s eating behavior. Another issue is that an exclusively anthropometric nutritional diagnosis was performed, assessing both weight status and the amount and distribution of fat, and we related eating behavior to this physical condition. For this nutritional diagnosis, we did not use blood biochemistry indicators, as this was not the aim of the study.
It should be noted that the subjects in the sample are schoolchildren, and a certain number of them eat part of their meals at school. For this reason, the answers to the test refer exclusively to the children’s eating behavior at home. Finally, as a limitation to be taken into account, we should mention that Cronbach’s alpha, which measures internal consistency reliability, is questionable for subscale 8 (FF) of the CEBQ. However, other authors have obtained similar values for this same subscale; such is the case of Gao et al. [50], who, analyzing a sample of Chinese schoolchildren, estimated a value of 0.49 for this subscale.
In the near future, we intend to analyze whether the pro-intake and anti-intake subscales of the CEBQ also show an association with a genetic risk score constructed from a battery of SNPs that we found to be associated with the anthropometric obesity profile in children [51]. We will thus verify whether eating behavior mediates the phenotypic expression of the genetic component of childhood obesity.
## 5. Conclusions
The present study shows a clear association between anthropometric nutritional status and scores on the subscales of the CEBQ psychometric test. Schoolchildren with excess weight, abdominal obesity or high %BF scored higher on all pro-intake subscales, whereas on the anti-intake subscales their average scores were lower than those of their normal-weight peers. This confirms that overweight or obese schoolchildren have a lower satiety response, a faster food intake and a pattern of emotional overeating. Given the association between eating behaviors and obesity, knowing the food-related behavior patterns of the child and adolescent population is essential for a more complete and comprehensive nutritional approach. In this sense, tools such as the CEBQ can be very useful.
# FHL2 Genetic Polymorphisms and Pro-Diabetogenic Lipid Profile in the Multiethnic HELIUS Cohort
## Abstract
Type 2 diabetes mellitus (T2D) is a prevalent disease often accompanied by dyslipidemia. Four and a half LIM domains 2 (FHL2) is a scaffolding protein whose involvement in metabolic disease has recently been demonstrated. The association of human FHL2 with T2D and dyslipidemia in a multiethnic setting is unknown. Therefore, we used the large multiethnic Amsterdam-based Healthy Life in an Urban Setting (HELIUS) cohort to investigate FHL2 genetic loci and their potential role in T2D and dyslipidemia. Baseline data of 10,056 participants from the HELIUS study were available for analysis. The HELIUS study comprises individuals of European Dutch, South Asian Surinamese, African Surinamese, Ghanaian, Turkish, and Moroccan descent living in Amsterdam, who were randomly sampled from the municipality register. Nineteen FHL2 polymorphisms were genotyped, and associations with lipid panels and T2D status were investigated. We observed that seven FHL2 polymorphisms were nominally associated with a pro-diabetogenic lipid profile including triglyceride (TG), high-density and low-density lipoprotein-cholesterol (HDL-C and LDL-C), and total cholesterol (TC) concentrations, but not with blood glucose concentrations or T2D status, in the complete HELIUS cohort upon correcting for age, gender, BMI, and ancestry. Upon stratifying for ethnicity, we observed that only two of the nominally significant associations passed multiple-testing adjustment, namely, the association of rs4640402 with increased TG and of rs880427 with decreased HDL-C concentrations in the Ghanaian population. Our results highlight the effect of ethnicity on selected pro-diabetogenic lipid biomarkers within the HELIUS cohort, as well as the need for more large multiethnic cohort studies.
## 1. Introduction
Type 2 diabetes mellitus (T2D) is a highly prevalent and complex metabolic disorder affecting millions of people worldwide [1]. T2D is characterized by insulin resistance accompanied by progressive pancreatic β-cell failure, leading to hyperglycemia [2]. Additionally, it is known that approximately $50\%$ of T2D patients develop dyslipidemia, resulting in increased fasting triglycerides (TG) and low-density lipoprotein cholesterol (LDL-C) concentrations, as well as decreased high-density lipoprotein cholesterol (HDL-C) concentrations [3,4,5,6]. The combination of hyperglycemia and dyslipidemia is a strong driver of cardiovascular disease. Furthermore, while the global prevalence of T2D is on the rise, it also appears that there are differences in the risk of developing T2D, dyslipidemia, and associated cardiovascular complications across ethnic groups [7,8], which may be caused by differences in the genetic background of individuals. Indeed, several genome-wide association studies (GWASs) have linked multiple single-nucleotide polymorphisms (SNPs) to T2D [9] and dyslipidemia [10]. However, many of these GWASs have been conducted in mostly European descent populations and provide less insight into the contribution of genetics to differences in T2D and dyslipidemia across ethnic groups.
Four and a half LIM domains 2 (FHL2) is a member of the FHL domain family of proteins. FHL2 is expressed most abundantly in the heart and muscles, and to a lesser extent in other organs [11,12,13,14]. FHL2 serves as an interaction platform that acts through various protein–protein interactions [15,16]. Upon binding to a target protein, FHL2 may either enhance or repress the binding of the target protein to another protein or may alter the conformation of the target protein [17]. Through binding with a target protein, FHL2 can regulate various protein signaling pathways [15]. Thus far, FHL2 has been researched extensively in the field of oncology and cardiovascular diseases, as well as inflammation and cell differentiation, although far less is known regarding its involvement in metabolism [11,18,19,20,21]. It is only recently, however, that publications have surfaced which demonstrate a link between FHL2 and metabolism. As such, GWASs have shown an association between FHL2 loci and body mass index (BMI) [22,23]. Interestingly, studies have also implicated FHL2 in glucose metabolism and diabetes-related complications [24,25,26]. Most recently, we demonstrated that FHL2-deficient mice are protected from weight gain on a high-fat diet. These mice show increased energy expenditure involving browning of the white adipose tissue and increased glucose uptake in the heart [27]. In line with these observations, we confirmed that, in human adipose tissue, the expression of FHL2 negatively associates with the expression of browning genes. Additionally, we also showed that FHL2 expression was higher in individuals with T2D than non-diseased individuals using publicly available human pancreatic islet datasets and that FHL2-deficient mice possessed improved glucose clearance compared to wild type (WT) mice [26]. In the same pancreatic islet datasets, we also observed a correlation between higher FHL2 expression and higher HbA1c levels. The purpose of the current study was to determine whether FHL2 genetic loci are associated with the incidence of T2D and various aspects of lipid metabolism in a large multiethnic cohort (HELIUS cohort) and, thus, to further elucidate the role of FHL2 in human T2D and dyslipidemia. Here, we hypothesized that FHL2 SNPs associate with specific markers of glucose and lipid metabolism such as fasting plasma glucose values and plasma TG concentrations in humans.
## 2.1. Baseline Characteristics
The cohort analyzed in this study consisted of 10,056 male and female subjects from different ethnic backgrounds, including European Dutch, African Surinamese, South Asian Surinamese, Ghanaian, Turkish, and Moroccan. The relative contribution of participants from each ethnic group was unequal in this study. In order of relative size, the groups were of Moroccan ($30.1\%$), Turkish ($26.2\%$), South Asian Surinamese ($14.9\%$), European Dutch ($12.8\%$), African Surinamese ($11.5\%$), and Ghanaian origin ($4.4\%$) (Table 1). The percentage of males differed across groups, with the highest percentage of males being present within the European Dutch group ($50\%$). Interestingly, we also observed differences in the percentage of individuals with T2D within groups, with the European Dutch participants having the lowest prevalence ($5.8\%$) and the South Asian Surinamese group the highest ($21.4\%$). The baseline characteristics of the cohort varied across ethnicities. The mean age was lowest in the Turkish and Moroccan groups (41 ± 12 years and 41 ± 13 years, respectively) and highest in the European Dutch (51.8 ± 13 years) and African Surinamese (52 ± 11 years) groups. Mean BMI also differed across groups, with European Dutch participants having, on average, a lower BMI (25.5 ± 4.4 kg/m2) than Turkish participants (28.5 ± 5.6 kg/m2). South Asian Surinamese participants showed the largest waist-to-hip ratio (WHR), while the Moroccan participants showed the smallest. Fat percentage was lowest in the European Dutch group ($29.4\%$ ± $7.5\%$) and highest in the Moroccan group ($32.8\%$ ± $8.3\%$). Fasting plasma glucose and HbA1c concentrations were lowest in the European Dutch group (5.4 ± 0.8 mmol/L and 36.9 ± 4.9 mmol/mol, respectively) and highest in the South Asian Surinamese group (5.9 ± 1.5 mmol/L and 42.6 ± 10.1 mmol/mol, respectively). Blood TG concentrations were lowest in the Ghanaian group (0.7 ± 0.4 mmol/L) and highest in the Turkish group (1.2 ± 0.9 mmol/L), while the inverse was true for blood HDL-C concentrations. Total cholesterol (TC) and blood LDL-C concentrations were both highest in the European Dutch participants (5.2 ± 1.0 mmol/L and 3.2 ± 0.9 mmol/L, respectively) and lowest in the Moroccan participants (4.6 ± 0.9 mmol/L and 2.9 ± 0.8 mmol/L) (Table 1).
## 2.2. FHL2 Genetic Polymorphism Distribution
A schematic representation of the FHL2 gene with its exons (1–6) and introns, along with the location of each FHL2 SNP, is illustrated in Figure 1. Additionally, the FHL2 polymorphisms with their respective reference and alternative alleles, position within the genome, SNP type classification, and ethnicity-specific allele frequency in this study are indicated in Table 2. The distributions of the FHL2 SNP reference and alternative alleles among the different ethnicities differed substantially in some cases. The Ghanaian subjects demonstrated the highest prevalence of the reference alleles for SNPs rs11124029, rs3087523, rs2278502, rs257678, rs880427, rs2376740, rs4851770, and rs7583367. In contrast, the Ghanaian group also showed the lowest proportion of the reference allele for SNPs rs2278501, rs4640402, and rs6750100 compared to the other ethnicities. The alternative allele of the missense SNP rs137869171, leading to an Asn226Lys amino-acid change, was only present within the European Dutch and Moroccan groups. Of the 19 FHL2 genetic polymorphisms that we evaluated, rs11124029 and rs3087523 lead to synonymous polymorphisms which do not alter the amino-acid sequence of the resulting protein.
## 2.3. Associations between FHL2 SNPs and Lipid Metabolism and Glucose Tolerance
Univariate analysis of the FHL2 SNPs with age, gender, BMI, and ancestry as covariates showed nominally significant associations between FHL2 SNP alternative alleles and plasma TG, HDL-C, LDL-C, and TC concentrations (Figure 2). The rs11124029 SNP was associated with a decreased HDL-C concentration ($$p \leq 0.045$$, beta = −0.009). The SNP rs4640402 was associated with a decreased TG concentration ($$p \leq 0.018$$, beta = −0.017), whereas it was associated with an increased HDL-C concentration ($$p \leq 0.025$$, beta = 0.008). On the other hand, the rs880427 SNP was associated with a decreased HDL-C concentration ($$p \leq 0.003$$, beta = −0.011), as well as increased HbA1c ($$p \leq 0.037$$, beta = 0.004). Furthermore, the SNP rs4851770 was associated with both an increased LDL-C ($$p \leq 0.018$$, beta = 0.01) and an increased TC concentration ($$p \leq 0.024$$, beta = 0.006). In addition to our analysis of the complete HELIUS cohort, the multiethnic composition allowed us to evaluate whether the SNP associations with the outcomes were similar across ethnic groups. To this end, we stratified our association analysis by ethnicity and conducted the same test with the same covariates in each subset. In the ethnicity-stratified analyses, we observed 31 nominally significant associations. The most associations were seen in the Moroccan group, where two different SNPs were associated with a decreased LDL-C concentration (rs11891016 and rs4851765), and the SNP rs4851770 was associated with an increased LDL-C concentration ($$p \leq 0.015$$, beta = 0.018); this SNP was also associated with an increased TC concentration ($$p \leq 0.006$$, beta = 0.014). In the same group, the SNPs rs137869171 ($$p \leq 0.005$$, OR = 3.757) and rs2278501 ($$p \leq 0.031$$, OR = 1.224) were associated with an increased risk of T2D, and the SNP rs3087523 was associated with a decreased TG concentration ($$p \leq 0.047$$, beta = −0.047). Lastly, we also observed that rs2376740 was associated with a decrease in HDL-C concentration ($$p \leq 0.04$$, beta = −0.013).
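For illustration only, a minimal sketch of the type of covariate-adjusted association test described above is given below, assuming additive alternative-allele dosage coding and a log10-transformed outcome; the file and column names are hypothetical and do not reflect the actual HELIUS analysis pipeline.

```python
# Illustrative covariate-adjusted association test for one SNP, as described
# above: linear regression of a log10-transformed lipid outcome on additive
# alternative-allele dosage, adjusting for age, gender, BMI, and ancestry.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("helius_genopheno.csv")      # hypothetical merged dataset
df["log_tg"] = np.log10(df["tg_mmol_l"])      # outcomes were log10-transformed

model = smf.ols(
    "log_tg ~ dosage_rs4640402 + age + C(gender) + bmi + C(ancestry)",
    data=df,
).fit()
print(model.params["dosage_rs4640402"], model.pvalues["dosage_rs4640402"])
```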
Interestingly, in the African Surinamese group, the SNP rs3087523 was associated with a decreased HDL-C concentration ($$p \leq 0.035$$, beta = −0.067), and the SNPs rs1914748 and rs11884297 were associated with decreased TG concentrations ($$p \leq 0.047$$, beta = −0.047 and $$p \leq 0.041$$, beta = −0.044, respectively). Furthermore, rs2576778 was associated with an increase in LDL-C ($$p \leq 0.042$$, beta = 0.045), and the SNP rs11124029 ($$p \leq 0.021$$, beta = 0.025) was associated with increased HbA1c. Lastly, rs11884297 was associated with a decrease in TC ($$p \leq 0.015$$, beta = −0.023), as well as in plasma glucose ($$p \leq 0.043$$, beta = −0.021). In the Ghanaian group, we found that the SNP rs2278501 was associated with an increased risk of T2D ($$p \leq 0.027$$, OR = 1.61), and the SNP rs880427 was associated with a decreased HDL-C concentration ($$p \leq 0.002$$, beta = −0.071). On the other hand, the rs11884297 SNP was associated with an increased HDL-C concentration ($$p \leq 0.009$$, beta = 0.053) and an increased TC concentration ($$p \leq 0.028$$, beta = 0.036). Moreover, the rs4640402 SNP was associated with a decreased HDL-C concentration ($$p \leq 0.035$$, beta = −0.037), an increased TG concentration ($$p \leq 0.001$$, beta = 0.103), and an increased HbA1c concentration ($$p \leq 0.033$$, beta = 0.028). Three SNPs were associated with a decreased TG concentration (rs11891016: $$p \leq 0.02$$, beta = −0.08; rs1914748: $$p \leq 0.007$$, beta = −0.08; rs4851765: $$p \leq 0.018$$, beta = −0.09). Lastly, the SNP rs48511772 was associated with a decrease in HDL-C ($$p \leq 0.046$$, beta = −0.04), and rs880427 was associated with an increase in HbA1c ($$p \leq 0.015$$, beta = 0.04).
In the Turkish group, the SNP rs4851770 was associated with an increased TC concentration ($$p \leq 0.038$$, beta = 0.011). Furthermore, the rs880427 SNP was associated with a decreased HDL-C concentration ($$p \leq 0.02$$, beta = −0.016) and increased HbA1c ($$p \leq 0.032$$, beta = 0.008). In the European Dutch group, SNP rs2576778 was associated with an increase in HDL-C concentration ($$p \leq 0.044$$, beta = 0.024). Only two of the nominally significant associations passed multiple testing adjustments, namely, the association of the SNP rs4640402 ($$p \leq 0.002$$, beta = 0.103) with increased TG and the association of rs880427 ($$p \leq 0.003$$, beta = −0.07) with decreased HDL-C concentrations in the Ghanaian population. All FHL2 SNP associations listed here are indicated in Table 3 and Supplementary File S1.
## 3. Discussion
In this study, we elucidated the associations between several FHL2 SNPs and multiple parameters of lipid metabolism, including TG, HDL-C, LDL-C, and TC, as well as T2D status, HbA1c, and glucose concentrations, in the HELIUS cohort. In addition, this is one of the first studies to make use of the genotype data available from the HELIUS cohort and to investigate the novel metabolism-related gene FHL2 and its polymorphisms in a multiethnic setting. In doing so, we illustrate for the first time the association of several FHL2 polymorphisms with plasma lipid concentrations and hyperglycemia. We also demonstrate that there appear to exist not only concordant but also opposing associations of these SNPs with outcomes between ethnic groups.
We identified the SNP rs4851770 to be associated with increased LDL-C and TC concentrations in the complete cohort, as well as in the Turkish group, and with increased TC concentration in the Moroccan group. The SNP rs2278501 was associated with an increased risk of T2D in the Ghanaian and Moroccan groups. Lastly, rs880427 was associated with a decreased HDL-C concentration in the total cohort and in the Ghanaian and Turkish groups.
On the other hand, we also identified seemingly opposing effects. The rs3087523 SNP was associated with a decreased HDL-C concentration in the African Surinamese group, but with a decreased TG concentration in the Moroccan group. The power of the nominally significant associations varied greatly, from $5\%$ to $89\%$, with a mean power of $53\%$; when considering all association tests, the power spanned the same range but with a mean of only $12\%$. This is likely a reflection of the still rather small sample size in this cohort and suggests that, in addition to environmental factors, multiple SNPs may be involved in driving these phenotypes. Thus, a validation study with a larger sample size could elucidate whether these associations are robust. Furthermore, while various lipid measurements were performed in this cohort, these were by no means exhaustive and did not include, for example, ceramides or plasmalogens. Subsequent studies in large multiethnic cohorts would benefit greatly from including a more exhaustive lipid panel.
Both T2D and dyslipidemia are complex metabolic disorders that affect large portions of the global population and are associated not only with one another, but also with other metabolic diseases. In this study, we aimed to uncover the link between FHL2 genetic polymorphisms and dyslipidemia, as well as T2D, using the large multiethnic HELIUS cohort. FHL2 is still a relatively unknown gene in the field of metabolism with currently only a handful of publications. Given that FHL2 has been mechanistically associated with insulin secretion [24,26], diabetic kidney disease [25], and obesity [27], and that SNPs and epigenetic changes in FHL2 are associated with T2D [24] and body fat mass [28], we hypothesized that FHL2 genetic variants may also be associated with specific markers of glucose and lipid metabolism such as fasting plasma glucose values and plasma TG concentrations in humans.
We focused on T2D-related parameters such as plasma glucose and HbA1c concentrations, as previous work by our group showed a correlation between FHL2 expression and HbA1c levels [26]. While we did not observe any significant associations between the FHL2 SNP variants and plasma glucose concentrations, we did uncover nominally significant associations with blood TG, HDL-C, LDL-C, and TC concentrations, as well as with T2D status and HbA1c concentration. FHL2 SNPs were associated with a pro-diabetogenic lipid profile with elevated LDL-C and TC, as well as decreased HDL-C. However, some FHL2 SNPs were also associated with increased HDL-C and decreased TG. This is interesting, as we recently elucidated the protective role of FHL2 deficiency against developing obesity in mice and highlighted the association between FHL2 expression and browning of white adipose tissue in humans [27]. Considering that FHL2 expression plays a role in the adipocyte phenotype in mice and potentially in humans, and that adipocytes also regulate serum TG and HDL-C, genetic variants in the FHL2 gene may affect blood TG and HDL-C concentrations in humans.
Of the 19 FHL2 genetic polymorphisms that we evaluated, rs11124029 and rs3087523 lead to synonymous polymorphisms which do not alter the amino-acid sequence of the resulting FHL2 protein, while rs137869171 does lead to a missense polymorphism that alters the amino-acid sequence of FHL2 (Asn226Lys). FHL2 is composed of nine zinc fingers, and this variation is located in the eighth zinc finger, changing a polar uncharged amino acid into a positively charged amino acid, which may have functional consequences for the protein that are at present unknown. In our analyses, however, we only observed a nominally significant association between rs137869171 and an increased risk of T2D in the Moroccan group. The remaining FHL2 genetic loci were located in noncoding regions such as introns and intergenic regions upstream of FHL2.
Our results showed that only two of the nominally significant associations passed multiple testing adjustments, namely, the association of SNP rs4640402 with increased TG and the association of SNP rs880427 with decreased HDL-C concentrations in the Ghanaian population. These SNPs highlight the potential contribution of FHL2 to the risk of developing a pro-diabetogenic lipid profile in this group. Our results within the Ghanaian group present similarities with previous work demonstrating a pro-diabetogenic polymorphism associated with increased susceptibility to T2D in Japanese men [29]. However, whether the FHL2 SNP associations we highlight here in the Ghanaian group are truly causal requires further inquiry. The genetic variation within intronic and intergenic regions may still have functional implications for FHL2 expression through the regulation of alternative splicing, in addition to affecting promoter and enhancer regions upstream of FHL2. In addition, it has also been demonstrated that synonymous polymorphisms may elicit non-neutral effects on mRNA expression and, thus, negatively impact the organism in which they occur [30]. However, this would still need to be studied in further detail and is currently beyond the scope of this study.
In conclusion, our data indicate a link between FHL2 polymorphisms and dyslipidemia that is dependent on ethnic differences between individuals but does not occur through an effect on glucose metabolism. This was most clearly visible in the Ghanaian group after correcting for multiple testing. Given the vast array of targets that FHL2 can bind to, as well as recent publications demonstrating its role in metabolism, it stands to reason that we do not yet fully understand the role of FHL2 in metabolism or the underlying mechanisms such as genetic variation that determine its expression and function.
## 4.1. Population
The Healthy Life in an Urban Setting (HELIUS) study is a large multiethnic cohort study conducted in Amsterdam, the Netherlands, for which data were collected from January 2011 to November 2015; the study has been described in detail elsewhere [31,32]. Briefly, the cohort contains individuals of European Dutch, South Asian Surinamese, African Surinamese, Ghanaian, Turkish, and Moroccan descent, ranging from 18 to 70 years old and living in or near Amsterdam. Potential participants were sampled with a simple random sampling method from the municipality registry, after stratification by ethnicity as defined by registered country of birth.
The complete study population consisted of 24,789 participants of European Dutch ($$n = 4671$$), South Asian Surinamese ($$n = 3369$$), African Surinamese ($$n = 4458$$), Ghanaian ($$n = 2735$$), Turkish ($$n = 4200$$), Moroccan ($$n = 4502$$), and unknown Surinamese or other unknown descent ($$n = 854$$), of which a subset had whole-genome genotyping data that were used for further analysis in this study [28,31]. Specifically, we analyzed the data of 10,056 individuals from the subset of the HELIUS cohort with genotyping data from the six largest ethnic groups, which equated to 1286 European Dutch, 1502 South Asian Surinamese, 1156 African Surinamese, 445 Ghanaian, 2636 Turkish, and 3031 Moroccan participants. All of these subjects were included in the analyses of the total cohort.
The study protocols were previously approved by the Amsterdam Medical Center ethical review board, and all participants provided written informed consent. Ethnicity was defined by the country of birth of the participants, as well as that of their parents. The exact distinction of ethnicity in the HELIUS cohort has also been described more extensively elsewhere [31]. Briefly, subjects were classified as European Dutch if they were born in the Netherlands and both parents were also of European Dutch origin. All non-European Dutch participants in this study were classified on the basis of whether they were born outside of the Netherlands and had at least one parent who was also born outside of the Netherlands, or whether they were born in the Netherlands but both parents were born elsewhere. A limitation of the country of birth indicator for ethnicity is that people who were born in the same country might have a different ethnic background, which, in the Dutch context, applies to the Surinamese population (Table 1). Therefore, after data collection, participants of Surinamese ethnic origin were further classified according to self-reported ethnic origin (obtained by questionnaire) into ‘African’, ‘South-Asian’, ‘Javanese’, or ‘other’. The homogeneity of each ethnic group was demonstrated previously for the genome [33], microbiome [34], and diet [35].
## 4.2. Phenotypical Assessments
Participants completed a structured questionnaire covering demographic, socioeconomic, and health-related behavioral information. Height was measured without shoes with a SECA 217 stadiometer to the nearest 0.1 cm. Weight was measured without shoes and in light clothing with SECA 877 scales to the nearest 0.1 kg. Body mass index (BMI) was determined by dividing measured body weight (kg) by height squared (m2). Fasting blood samples were drawn, and plasma samples were used to determine the concentration of glucose by spectrophotometry, using hexokinase as the primary enzyme (Roche Diagnostics, Tokyo, Japan). In this study, we defined individuals as suffering from T2D if they self-reported as such, had increased fasting glucose (≥7 mmol/L), or used glucose-lowering medication. Blood samples were drawn from all participants in a fasted state (>8 h of fasting). Serum TG, total cholesterol (TC), HDL-cholesterol (HDL-C), glucose, and LDL-cholesterol (LDL-C) concentrations were measured or calculated from plasma samples, while whole blood was used to determine hemoglobin A1c (HbA1c) concentrations as described previously, using an in-house assay [36]. The continuous measurements were log10-transformed prior to association testing.
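As a minimal illustration of these phenotype derivations (BMI from measured weight and height, the T2D case definition, and the log10 transformation of continuous outcomes), a sketch could look as follows; all file and column names are hypothetical:

```python
# Minimal sketch of the phenotype derivations described above; all file and
# column names are hypothetical. BMI = weight (kg) / height (m) squared; T2D
# is defined by self-report, fasting glucose >= 7 mmol/L, or use of
# glucose-lowering medication; continuous outcomes are log10-transformed.
import numpy as np
import pandas as pd

df = pd.read_csv("phenotypes.csv")
df["bmi"] = df["weight_kg"] / (df["height_cm"] / 100.0) ** 2
df["t2d"] = (
    df["self_reported_t2d"].astype(bool)
    | (df["fasting_glucose_mmol_l"] >= 7.0)
    | df["uses_glucose_lowering_med"].astype(bool)
).astype(int)
for col in ["tg_mmol_l", "tc_mmol_l", "hdl_mmol_l", "ldl_mmol_l"]:
    df["log_" + col] = np.log10(df[col])
```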
## 4.3. Genotyping and Polymorphism Quality Control
Genotyping of HELIUS participants was performed as described elsewhere [37,38]. After the original quality control, the autosomal chromosomes were imputed using the Sanger Imputation Service (https://imputation.sanger.ac.uk, accessed on 1 May 2021). Phasing was performed with EAGLE2, and imputation with the PBWT method, using the Haplotype Reference Consortium reference panel (release 1.1). Thereafter, poorly imputed SNPs were filtered using a 0.5 imputation quality score cutoff. All chromosome locations are based on the GRCh37 coordinates. Additional quality control was performed using PLINK version 1.9 (the following parameters for quality tests were used: --geno 0.05 --mind 0.05 --indep-pairwise 50 5 0.5 --genome --min 0.1875 --hwe 0.00001). The following SNPs were directly genotyped on the array, whereas the rest were imputed: rs880427, rs1914748, and rs6750100. In the full cohort, SNP variant rs137869171 had a minor allele frequency (MAF) <$1\%$, while variants rs2278502, rs2376740, and rs11891016 were in linkage disequilibrium (LD). Despite this, all FHL2 genetic polymorphisms were retained for further analysis. No quality issues were observed for any of the participants. All FHL2 SNP variants underwent quality control per ethnic group for the ethnicity-specific analyses. For the European Dutch group, all variants passed the MAF threshold of $1\%$, with rs2278502, rs2376740, and rs11891016 being in LD. In the South Asian Surinamese group, all variants passed MAF $1\%$, with only rs2278502 and rs11891016 being in LD. All variants in the African Surinamese group passed MAF $1\%$. In the Ghanaian group, rs137869171 did not meet the MAF $1\%$ criterion and was in LD alongside rs11891016. For the Turkish group, all FHL2 SNP variants met the MAF $1\%$ criterion, while rs2278502, rs11891016, and rs2376740 were in LD. Lastly, in the Moroccan group, all SNP variants met the MAF $1\%$ criterion; however, the variants rs2278502, rs2376740, rs11891016, and rs11124029 were in LD.
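The quality control itself was performed in PLINK with the flags quoted above; purely as an illustration of the per-ethnic-group minor-allele-frequency check, a sketch on additive allele dosages might look as follows (all names hypothetical):

```python
# The quality control itself was run in PLINK 1.9 with the flags quoted above.
# Purely as an illustration, a per-ethnic-group minor-allele-frequency (MAF)
# check on additive allele dosages (0/1/2 alternative-allele counts) might
# look like this; all names are hypothetical.
import pandas as pd

geno = pd.read_csv("fhl2_dosages.csv")        # one row per participant
snp_cols = [c for c in geno.columns if c.startswith("rs")]

def passes_maf(dosages: pd.Series, threshold: float = 0.01) -> bool:
    alt_freq = dosages.mean() / 2.0           # allele frequency from dosages
    return min(alt_freq, 1.0 - alt_freq) >= threshold

for ethnicity, grp in geno.groupby("ethnicity"):
    kept = [s for s in snp_cols if passes_maf(grp[s])]
    print(ethnicity, f"{len(kept)}/{len(snp_cols)} SNPs pass MAF >= 1%")
```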
# Trends of Antidiabetic and Cardiovascular Diseases Medication Prescriptions in Type 2 Diabetes between 2005 and 2017—A German Longitudinal Study Based on Claims Data
## Abstract
Background: With an attempt to understand possible mechanisms behind the severity-dependent development of type 2 diabetes (T2D) comorbidities, this study examines the trends of antidiabetic and cardiovascular diseases (CVD) medication prescriptions in individuals with T2D. Methods: The study is based on claims data from a statutory health insurance provider in Lower Saxony, Germany. The period prevalence of antidiabetic and CVD medication prescriptions was examined for the periods 2005–2007, 2010–2012, and 2015–2017 in 240,241, 295,868, and 308,134 individuals with T2D, respectively. (Ordered) logistic regression analyses were applied to examine the effect of time period on the number and prevalence of prescribed medications. Analyses were stratified by gender and three age groups. Results: The number of prescribed medications per person increased significantly for all examined subgroups. For the two younger age groups, insulin prescriptions decreased but those of non-insulin medications increased, while both increased significantly over time for the age group of 65+ years. Except for glycosides and antiarrhythmic medications, the predicted probabilities for CVD medications increased over the examined periods, with lipid-lowering agents demonstrating the highest increase. Conclusions: The results point towards an increase in medication prescriptions in T2D, which is in line with the observed increase in most comorbidities, indicating morbidity expansion. The increase in CVD medication prescriptions, especially lipid-lowering agents, could explain the specific development of severe and less severe T2D comorbidities observed in this population.
## 1. Introduction
Temporal change in morbidity has been of high concern due to its substantial implications for health policy planning and public health programming [1]. While non-communicable diseases have become the leading cause of premature morbidity globally [2], type 2 diabetes (T2D) has reached alarming levels due to its increasing prevalence and the associated impairment of quality of life [3].
In Germany, research suggests that morbidity in the context of T2D is expanding. In addition to the increasing prevalence rates of T2D in Germany [4], the age at onset has been shown to be declining [5]. On the other hand, life expectancy for individuals with T2D increased progressively between the years 2005 and 2014 [4], indicating more years lived with the disease. Adding to that, the extra years lived with T2D are associated with more comorbidities [6]. Based on a large population of a health insurance provider in the state of Lower Saxony, Germany, our previous research examined the development of comorbidities in individuals with T2D between the years 2005 and 2017. It indicated that individuals with T2D had significantly elevated risks of having more comorbidities diagnosed in the time period 2015–2017 compared to 2005–2007 [7]. Moreover, our study showed that the prevalence of severe adverse cardiovascular disease (CVD) events, such as myocardial infarction (MI) and stroke, either remained constant or slightly decreased over time, depending on the age and gender group examined. However, at the same time, a clear and substantial increase was observed in other CVD comorbidities that count as risk factors, such as hypertension, cardiac insufficiency, and hyperlipidemia, among men and women of all age groups examined [7]. The study also reported a significant increase in the risk of having other vascular diseases, such as retinopathy, nephropathy, and polyneuropathy. Accordingly, it was concluded that the extra years lived with T2D are spent with more comorbidities, which signifies a deterioration in quality of life and, thus, indicates an expansion of morbidity in the population of individuals with T2D.
However, the mechanisms behind the different development patterns of CVD comorbidities in individuals with T2D depending on severity remain unclear. It can be hypothesized that, as a result of new treatment guidelines [8], different medication prescription practices have been developing, leading to the postponement of severe adverse health events. At the same time, the deterioration of lifestyle risk factors [9] might lead to an increase in the prevalence of other chronic comorbidities, despite better treatment and diagnoses. This study will focus on the first premise of the abovementioned hypothesis by exploring temporal trends of medication prescriptions in T2D.
Studies from Europe reported evidence on an increase in the number of medications prescribed per patient [10] and polypharmacy [11] during the last two decades. Nevertheless, while studies examining time trends of antidiabetic medication use in Germany exist [12,13,14], evidence on the time trends of specific medication groups aimed at managing diabetes as well as the comorbidities accompanying it is scarce. Moreover, considering gender and age differences in the management of T2D is essential for understanding mechanisms that lie behind the morbidity development patterns in specific subgroups.
Based on the same data used in our previous research to examine the development of comorbidities in T2D [7], this study aims to explore possible mechanisms behind the severity-dependent developmental trends of T2D comorbidities by examining the gender- and age-stratified development of medication prescriptions in T2D. We hypothesize that individuals with T2D have been more routinely medicinally treated over time between the years 2005 and 2017, leading to the delay of severe CVD events. In order to examine this hypothesis, the following research questions will be addressed:
- How has the number of prescribed medications per person been developing between the years 2005 and 2017 in individuals with T2D?
- How has the prevalence of antidiabetic and CVD medication prescriptions been developing between the years 2005 and 2017 in individuals with T2D?
## 2.1. Data
The database for this study is anonymized claims data of individuals insured by “AOKN: Allgemeine Ortskrankenkasse Niedersachsen” covering the years 2005–2017. AOKN is a large statutory health insurance provider in the state of Lower Saxony, Germany, which insures around one-third of the population in this state [15]. Given that health insurance is mandatory in Germany, about $90\%$ of the population are statutorily insured, with insurance premiums defined individually based on income [16]. All individuals in the statutory health insurance system receive the same health care coverage. The datasets include demographic information, in- and outpatient diagnoses, medical prescriptions, and medical treatments. The data are currently available for the years 2005 to 2017, allowing for a longitudinal analysis of diagnoses and prescribed treatments in this time period. The scientific use of pre-existing anonymized claims datasets is regulated by German law in the German Civil Code (“Bürgerliches Gesetzbuch”). The data protection officer of the Local Statutory Health Insurance of Lower Saxony (AOK Niedersachsen, headquartered in Hannover, Germany) has given permission to use the data for scientific purposes. Therefore, no ethical approval was required for this study.
## 2.2. Definition of T2D Cases and Medications
The population of this study includes all insured individuals with T2D aged 18 years and older. T2D is defined based on individual diagnosis data, based on the German version of the International Classification of Diseases (ICD-10 GM) and on medication data. The exact definition and plausibility mechanism of this definition have been described in an earlier publication [7].
Medication groups were identified according to the anatomical therapeutic chemical classification (ATC) with daily doses defined for the German pharmaceutical market [17].
After consulting clinicians, 12 discrete diabetes and CVD medication groups were first identified, namely insulin, non-insulin antidiabetic medications, blood thinning medications, lipid-lowering agents, vasodilators, diuretics, beta blockers, calcium channel blockers, renin–angiotensin agents, glycosides, antiarrhythmic agents, and antiadrenergic agents. The corresponding ATC codes of the 12 medication groups are presented in the Supplementary Materials in Table S1. Then, the discrete CVD medication groups were simplified into four major medication groups to be used in the analyses: (1) antihypertensive agents (beta blockers, calcium channel blockers, diuretics, renin–angiotensin agents, vasodilators, and antiadrenergic agents), (2) lipid-lowering agents, (3) blood thinning medications, and (4) glycosides and antiarrhythmic medications.
In order to avoid overestimation of the prescription prevalence, prescriptions were only considered plausible if they were present for an individual at least twice in each time period examined, with the exception of individuals who were insured for only one quarter in the corresponding period.
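A minimal sketch of this plausibility rule on tabular claims data is given below; the column names are hypothetical, as the actual claims processing pipeline is not described beyond this rule:

```python
# Sketch of the plausibility rule described above: a medication group counts
# for a person in a period only if prescribed at least twice in that period,
# unless the person was insured for a single quarter. Column names are
# hypothetical.
import pandas as pd

rx = pd.read_csv("prescriptions.csv")    # person_id, period, atc_group per row
ins = pd.read_csv("insurance.csv")       # person_id, period, quarters_insured

counts = (
    rx.groupby(["person_id", "period", "atc_group"])
      .size()
      .rename("n_rx")
      .reset_index()
      .merge(ins, on=["person_id", "period"])
)
plausible = counts[(counts["n_rx"] >= 2) | (counts["quarters_insured"] == 1)]
```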
## 2.3. Time Period
In this study, the trend of medication prescriptions was examined over three time periods between 2005 and 2017, which are the years for which the data are currently available, with equal intervals and gaps in-between. The three time periods were 2005–2007 (p1), 2010–2012 (p2), and 2015–2017 (p3). In order to limit bias, T2D, as well as all medication prescriptions, were newly defined in each period using the same criteria, allowing for the same potential errors and, thus, improving comparability among the time periods. The time periods approach was used in order to better illustrate clear directions of temporal development. Two-year gaps were left between the three time periods to provide sufficient time for possible changes in morbidity and prescription frequency to happen.
## 2.4. Statistical Analysis
In order to detect age and gender differences in the trends of medication prescriptions in individuals with T2D, all analyses in this study were applied separately for men and women, and for three age groups: 18–45 years, 46–64 years, and 65+ years.
Period prevalence rates of the single medication groups in the three examined time periods were calculated for all subgroups and are displayed in the Supplementary Materials in Table S2. Denominators are based on the aggregated insurance duration in each period in terms of person-years, to correct for censoring that might result from different observation periods.
## 2.4.1. Trend of the Number of Prescribed Medications
The number of prescribed discrete medication groups (that could range between 0 and 12) was grouped into the following categories: “0 Medications”, “1–2 Medications”, “3–4 Medications” and “5+ Medications”. Ordered logistic regression was applied to examine the effect of time period on the number of medications. Separate models were created for each of the age and gender groups, resulting in six models (model 1: men, 18–45 years; model 2: men, 46–64 years; model 3: men, 65+ years; model 4: women, 18–45 years; model 5: women, 46–64 years; model 6: women, 65+ years). In each model, the outcome or dependent variable was the number of prescribed medications with its four above described categories. The main independent variable was the time period with its three categories: p1 (2005–2007), p2 (2010–2012), and p3 (2015–2017), with p1 being the reference group. Age (as a metric variable, displaying the age of individuals within each age subgroup) and duration of observation (days of observation or insurance within each time period) were added as covariates to all models to adjust for their influence.
Cluster-robust standard errors were used in all models in order to correct for the possible effects of within cluster variation due to having individuals in more than one period, which can lead to autocorrelation.
Ordered logistic regression provides an odds ratio (OR) indicating the odds of being one category higher in the outcome (category of number of medications) for the examined group (p2 or p3) compared to the control group (p1). Even though time period is treated as a predictor variable in the regression models, the aim is to examine the trends of medication prescriptions by interpreting the odds of having a higher number of prescribed medications over time, i.e. in p2 and p3 compared to p1. No additional potential influencing factors were added to the models to concretize the effect of time period within the context of temporal development.
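As an illustration of the model just described, a minimal sketch of one of the six ordered logistic regressions is given below in Python; the paper fit these models in STATA with cluster-robust standard errors (as described above), which this sketch omits, and all file and column names are hypothetical.

```python
# Minimal sketch of one of the six ordered logistic models described above
# (outcome: four ordered medication-count categories; predictors: period
# dummies plus age and duration of observation). The paper fit these models
# in STATA with cluster-robust standard errors, which this sketch omits;
# all file and column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("t2d_men_65plus.csv")
cats = ["0 Medications", "1-2 Medications", "3-4 Medications", "5+ Medications"]
df["med_cat"] = pd.Categorical(df["med_cat"], categories=cats, ordered=True)
df["p2"] = (df["period"] == "2010-2012").astype(float)  # p1 is the reference
df["p3"] = (df["period"] == "2015-2017").astype(float)

res = OrderedModel(
    df["med_cat"], df[["p2", "p3", "age", "obs_days"]], distr="logit"
).fit(method="bfgs", disp=False)
print(np.exp(res.params[["p2", "p3"]]))  # odds ratios for p2 and p3 vs. p1
```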
## 2.4.2. Trend of the Prescription Prevalence
Logistic regression analyses were applied to examine whether there was a significant change in the prevalence of the prescribed medication groups over the three time periods. Cluster-robust standard errors were used to correct for within cluster variation.
In this line of analysis, the outcomes or dependent variables were the six medication groups, namely (1) insulin, (2) non-insulin antidiabetic medications, (3) antihypertensive agents, (4) lipid-lowering agents, (5) blood thinning medications, and (6) glycosides and antiarrhythmic medications. Each of these dichotomous outcomes had two categories, yes/no, where “yes” implies having medications prescribed from the corresponding medication group within the examined time period. For each of these outcomes, separate logistic regression models were applied for each gender and age group, resulting in six models per outcome. Similar to the abovementioned analyses of the trend of the number of prescribed medications, the main independent variable was time period, and age within each age category and duration of observation were adjusted for in all models.
Since odds ratios tend to either overestimate (if OR > 1) or underestimate (if OR < 1) effects when dealing with outcomes of more than a $10\%$ prevalence rate [18], prevalence ratios (PR) were calculated instead in this analysis.
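The estimation method for the prevalence ratios is not stated in the paper; one common approach consistent with the description above is a modified Poisson regression with cluster-robust standard errors, sketched here with hypothetical column names:

```python
# One common way to estimate prevalence ratios (PRs) for a common binary
# outcome is a modified Poisson regression with cluster-robust standard
# errors (clustered on the individual, since persons can appear in several
# periods). The paper's exact estimation method is not stated; names are
# hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("t2d_cohort.csv")       # lipid_lowering is a 0/1 outcome
model = smf.glm(
    "lipid_lowering ~ C(period, Treatment(reference='p1')) + age + obs_days",
    data=df,
    family=sm.families.Poisson(),
)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["person_id"]})
print(np.exp(res.params))                # exponentiated coefficients = PRs
```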
## 2.4.3. Predicted Probabilities
Based on the examined regression models described above, predicted probabilities using time period as the main independent variable with margins at means for age and duration of observation were estimated and graphically displayed. Predicted probabilities provide the possibility of interpreting the results more accurately than prevalence rates because they display adjusted effects [19].
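As an illustration of “margins at means”, the sketch below continues the hypothetical example above: after fitting a logistic model, the outcome probability is predicted for each time period with age and duration of observation held at their sample means.

```python
# Sketch of "margins at means": after fitting a logistic model, predict the
# outcome probability for each period with age and duration of observation
# held at their sample means. This mirrors adjusted predicted probabilities
# as produced by STATA's margins command; names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("t2d_cohort.csv")
logit = smf.glm(
    "lipid_lowering ~ C(period) + age + obs_days",
    data=df,
    family=sm.families.Binomial(),
).fit()

at_means = pd.DataFrame({
    "period": ["p1", "p2", "p3"],
    "age": df["age"].mean(),
    "obs_days": df["obs_days"].mean(),
})
print(logit.predict(at_means))  # adjusted predicted probability per period
```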
The software STATA v15.1 was used for all statistical analyses in this study.
## 3. Results
This study involved 240,241, 295,868, and 308,134 individuals with T2D over the three time periods 2005–2007, 2010–2012, and 2015–2017, respectively. The distributions of age, gender, and insurance durations are presented in Table 1.
## 3.1. Number of Medications
In men, the number of prescribed medications increased over the three periods among all examined age groups. The predicted probability of having no medications prescribed decreased by up to $3\%$ for the youngest age group, while that of taking five or more medications increased by up to $9\%$ for the oldest age group. Nevertheless, the differences were mostly apparent between the first two time periods, while the change between the periods 2010–2012 and 2015–2017 was minimal (Figure 1).
In women, the change in the number of prescribed medications was only present for the age group of 65+ years. While women in this age group had a slightly lower probability of having only one–two medications prescribed, the probability of having five or more medications prescribed was up to $6\%$ higher during the latest period (Figure 1).
The ordered logistic regression analysis showed that in men, the probability of having at least one more medication prescribed significantly increased over time for all age groups, while it only significantly increased for the age group of 65+ years in women. Men aged 18–45 years were $16\%$ and $21\%$ more likely to have one additional agent prescribed if they were in p2 and p3, respectively (compared to p1). While these probabilities were slightly higher for the middle age group ($18\%$ and $22\%$ for p2 and p3, respectively), they were more pronounced for the age group of 65+ years, where men were $30\%$ and $40\%$ more likely to have at least one additional prescribed medication in p2 and p3 respectively. Though significant, the increase was less pronounced for this age group in women, where the odds increased by $21\%$ and $24\%$ for p2 and p3, respectively (Table 2).
## 3.2. Antidiabetic Medications
In both men and women, the predicted probabilities of having insulin prescribed decreased by $4\%$ for the youngest age group, while the predicted probabilities of being prescribed non-insulin antidiabetic medications increased considerably, being $14\%$ and $6\%$ higher in p3 compared to p1 for men and women, respectively. Though less pronounced, the changes in the predicted probabilities for insulin and non-insulin medications exhibited similar patterns for the middle age group. For the oldest age group, however, the predicted probabilities increased for both insulin and non-insulin antidiabetic medications in men but remained almost unchanged in women (Figure 2).
The logistic regression analysis showed that being in the second or the third time periods was significantly associated with a higher chance for non-insulin prescriptions and a lower chance for insulin prescriptions among the youngest and the middle age groups. In the oldest age group, the chances of having both insulin and non-insulin medications prescribed were significantly higher in men, while, in women, only non-insulin prescriptions increased significantly over time (Table 3).
## 3.3. CVD Medications
While the prescriptions of antiarrhythmic medications and glycosides were minimal for the two younger age groups in p1, their predicted probabilities slightly decreased over time. For the oldest age group, the predicted probabilities of having prescriptions from this medication group were $15\%$ in men and $18\%$ in women in p1, but these probabilities decreased by more than a half in p3 (Figure 3). These results were also reflected in the logistic regression analyses, with significant reductions in the PRs for most of the age and gender subgroups examined (Table 3). For lipid-lowering and blood thinning medications, there was barely any change in the predicted probabilities for the youngest age group. For the two older age groups, however, there was a clear increase in the predicted probabilities of lipid-lowering and blood thinning medications being prescribed, with the oldest age group being the most affected. While this applied to both genders, men had higher probabilities than women in all three time periods (Figure 3). These conclusions were mostly reproduced in the logistic regression analyses, which showed that men and women aged 65 years or older had an up to $38\%$ and $60\%$ higher chance of having lipid-lowering agents prescribed in p2 and p3, respectively. They also had 9–$11\%$ and 23–$29\%$ higher chances of blood thinning medication prescriptions in p2 and p3, respectively (Table 3).
The predicted probabilities for antihypertensive agents were the highest among all medication groups examined. Approximately a third of the individuals in the youngest age group had antihypertensive medications prescribed in p1, with an increase by a few percentage points in p2. In the middle age group, $69\%$ of men and three-quarters of women were predicted to have had prescriptions from this medication group in p1. While this probability remained almost constant in women, it increased by a few percentage points in p2 in men. In the oldest age group, there was also a slight increase in the predicted probabilities in p2, but these started off in p1 with $87\%$ and $91\%$ in men and women, respectively. Among all the subgroups, almost no change appeared between p2 and p3 (Figure 3). The logistic regression analyses showed a slight but significant increase in the chance of having this medication group prescribed for all age groups in men and the oldest age group in women (Table 3).
## 4. Discussion
In an attempt to understand possible mechanisms behind different patterns of morbidity expansion in the context of T2D, this study examined the temporal development of medication prescriptions in men and women with T2D.
Overall, the study reported an increase in the number of discrete prescribed medications per individual between the time periods of 2005–2007 and 2015–2017. This is in accordance with the finding from our previous study on the development of comorbidities [7], which reported that the number of comorbidities per individual increased over the same time periods in individuals with T2D. The increase in the number of prescribed medications per person with T2D is also in line with evidence from other European studies. Higgins et al. reported a significant increase in the number of prescribed medical agents per person between the years 2000 and 2015 [10]. Similarly, Oktora et al. reported an increase in “polypharmacy” in individuals with T2D between the years 2012 and 2016 [11]. Nevertheless, while the use of multiple medications can be essential for the treatment of diabetes and its comorbidities, the increase in the number of medications prescribed per person can be associated with related adverse effects and a higher risk of potentially inappropriate medication [11,20].
The prevalence of antidiabetic medication prescriptions overall remained constant in women but increased slightly in men. When splitting this group, different patterns could be observed, pointing towards a decrease in insulin but an increase in non-insulin prescriptions for the two younger age groups. This could be partly attributed to changes in medical therapeutic practices, such as the delay of insulin prescription in individuals with T2D [21,22,23]. A longitudinal study from Germany and the UK suggests that the time to insulin therapy, as well as the average glycated hemoglobin levels before insulin therapy, increased between 2005 and 2010 [23]. Nevertheless, the increase in the prevalence of non-insulin prescriptions is of a notably higher extent than the decrease in insulin prescriptions in the two younger age groups. In addition, the prescription prevalence of both insulin and non-insulin medications increased for the older age group (65+ years). While this might partly be the result of earlier detection of T2D, it also points towards a deterioration in the management of T2D and an expansion of morbidity in this population, despite changes in prescription practices. Evidence from our previous research, which was carried out on the same study population, suggests that in the age group of 65+ years, there has been a marked increase in the prevalence of diabetes-related nephropathy [7], for which non-insulin therapy is contraindicated [24]. Individuals with T2D who suffer from this complication are, thus, left with insulin therapy as the only choice. This, in turn, also reflects an expansion of the morbidity level, especially in the age group of 65+ years.
Except for glycosides and antiarrhythmic medications, which have been prescribed less frequently, possibly due to potential side effects [25] and the existence of medical alternatives, the predicted probabilities as well as the odds of having CVD medications prescribed increased for the two older age groups. Although studies that examined trends in the use of CVD medications in T2D are limited, the available evidence from Germany [14,26], as well as from other countries, such as Taiwan [27] and the USA [28], points to a similar conclusion regarding the trend of CVD medication prescriptions in T2D. Evidence from two German studies indicates that the proportion of individuals with T2D who receive antihypertensive and lipid-lowering medications increased between 2000 and 2007 [14] and between 1990 and 2011 [26], respectively. The increase in the prevalence of the prescription of CVD medications is consistent with the manifest increase in the prevalence of CVD comorbidities (hypertension, hyperlipidemia, and cardiac insufficiency) that was observed in our previous research carried out on the same population as the current study [7], which reflects a higher morbidity level in this population. Moreover, research also indicates a temporal increase in the prevalence of CVD risk factors in individuals with T2D, such as obesity [29], which can also explain the higher prescription rates of related CVD medications in this population. Nonetheless, changes in medical practices could still be partly responsible for the trend of some CVD medication prescriptions, such as lipid-lowering agents, which had the most pronounced increase among all CVD medication groups between p1 and p3 for both men and women. In 2016, the European Society of Cardiology (ESC) lowered the target levels for circulating low-density lipoprotein cholesterol (LDL-C) and, thus, the threshold at which lipid-lowering medications would be prescribed [30]. The more recent ESC guidelines from 2019 recommend even lower target blood LDL-C levels [31], which suggests that, apart from the increasing morbidity, a higher prevalence of prescriptions would potentially be observed in future studies that consider later periods.
The increase in the prescription of CVD and antidiabetic medications in T2D could, thus, provide a possible explanation for the different patterns of development of CVD comorbidities in T2D depending on severity. Our previous findings suggest that the development of severe CVD comorbidities, such as MI and stroke, either remained constant or decreased for some examined subgroups between 2005 and 2017. On the other hand, the predicted probabilities of other comorbidities, such as hypertension and hyperlipidemia, which also act as risk factors for MI and stroke, increased markedly and significantly among almost all examined subgroups [7]. It was, thus, hypothesized that differences in medication prescription practices could be associated with the delay of serious health events, such as MI and stroke, in individuals with T2D. The results of the present study support this hypothesis and could be interpreted as an improvement in the medicinal management of risk factors in terms of medication prescription practices, thus delaying serious health events. The results also reaffirm that morbidity is expanding in the population of individuals with T2D. Nevertheless, it remains an open question whether the expansion of morbidity in T2D in terms of a higher risk of milder CVD comorbidities over time is due to the temporal increase in lifestyle risk factors. Research suggests that lifestyle modification could be more effective than medications in the management of CVD in T2D [32]. The results of a German longitudinal study that examined the development of cardio-metabolic risk factors between the years 1990 and 2011 suggest that while the prevalence of using antidiabetic, lipid-lowering, and antihypertensive medications increased significantly, there was a simultaneous significant increase in the prevalence of smoking and obesity [26]. Thus, the increase in milder CVD comorbidities that are associated with a deterioration of quality of life in individuals with T2D [29] could be a result of an increase in lifestyle risk factors, such as unhealthy eating habits [33] and a lack of adequate physical activity [34], despite the existence of the Disease Management Program (DMP) [35] and better adherence to the guidelines of the DMP over time [36].
## 5. Strengths and Limitations
The database for this study is routine data from a large population of statutory insured individuals in the state of Lower Saxony, thus providing adequate power. All medication prescription information is available, ruling out recall or selection bias. One limitation of the study is that information about the actual intake of medications, as opposed to prescriptions alone, is not available. However, there is no clear evidence of temporal differences in medication adherence in T2D, which makes the three periods comparable in terms of the proportion of individuals who actually took the medications after obtaining them. Moreover, since the study aims to discuss results in the context of morbidity development, trends of medication prescriptions would presumably reflect how the “need” for these medications has been developing. In addition, certain medication combinations, and not only the number of medications, can be relevant in terms of morbidity development in T2D. However, this was not considered due to the scope of the paper and will be addressed in future studies. Additionally, the results are not fully generalizable to all individuals with T2D in Germany, since the socioeconomic distribution of AOKN differs to some extent from that of the general population [37].
## 6. Conclusions
This study provides evidence for the temporal increase in the prevalence of medication prescriptions in T2D. The results of this study support the hypothesis of morbidity expansion in the population of T2D. The increase in CVD medication prescriptions, especially lipid-lowering agents, could explain the severity-dependent developmental pattern of T2D comorbidities. Further investigations are planned to examine the temporal development of lifestyle risk factors in T2D to provide a better understanding for the mechanisms behind morbidity expansion in this population. |
# mHealth Technology as a Help Tool during Breast Cancer Treatment: A Content Focus Group
## Abstract
Purpose: To assess the usability and content preferences of mHealth software developed for breast cancer patients as a tool to obtain patient-reported outcome measures (PROMs), improve the patient’s knowledge about the disease and its side effects, increase adherence to treatment, and facilitate communication with the doctor. Intervention: An mHealth tool called the Xemio app provides side effect tracking, social calendars, and a personalized and trusted disease information platform to deliver evidence-based advice and education for breast cancer patients. Method: A qualitative research study using semi-structured focus groups was conducted and evaluated. This involved a group interview and a cognitive walkthrough test using Android devices, with the participation of breast cancer survivors. Results: The ability to track side effects and the availability of reliable content were the main perceived benefits of using the application. The ease of use and the method of interaction were the primary concerns; however, all participants agreed that the application would be beneficial to users. Finally, participants expressed their expectation of being informed by their healthcare providers about the launch of the Xemio app. Conclusion: Participants perceived a need for reliable health information and appreciated its delivery through an mHealth app. Therefore, applications for breast cancer patients must be designed with accessibility as a key consideration.
## 1. Introduction
Breast cancer is the most common form of cancer among women [1,2]. By 2018, breast cancer mortality had decreased by $41\%$ due to various factors, including early diagnosis, advancements in treatment, lifestyle changes, improved nutrition, and research [3,4]. This increased survivorship highlights the importance of focusing on the long-term goals and consequences of treatment to enhance quality of life and overall well-being, promoting a proactive approach to health [5].
Technological developments in recent years have been essential in supporting methodologies for diagnosing health and cancer. These developments include the standardization of portable and wearable devices for data collection and health biomarkers, as well as advances in data analysis through artificial intelligence [6,7,8,9,10].
The use of apps to promote health and well-being has grown exponentially [11]. Smartphones facilitate the creation and development of millions of apps, including communication apps, geolocation with maps, video games, video streaming, and health apps. These can be easily downloaded from app stores and offer low-cost solutions that can be accessed by a large global population.
Using mHealth (mobile Health), also known as health apps, to support breast cancer patients during treatment and post-treatment can be a helpful complement to the usual treatment for patients [12]. mHealth can be an effective tool for obtaining patient-reported outcome measures (PROMs) that reflect patients’ perceptions of their own health. Within the framework of Value-Based Healthcare (VBHC), there is a value change from volume-driven to value-driven care, empowering patients by allowing them to report on their disease-related side effects and quality of life, and reinforcing treatment adherence [13,14,15,16,17,18,19].
PROMs are thought to be central to the understanding of the effectiveness of treatments in cancer [20], improving communication between patients and providers, patient satisfaction [21], daily life [22], and survival [23]. According to Osborn et al., a small number of mHealth applications have been used in clinical studies examining a variety of cancer types and age groups. The studies found that the positive impact was largely limited to improved symptom control, although some studies reported increased symptoms. Data on other outcomes, including health economic measures, were limited [17].
Xemio (www.xemio.org (accessed on 3 March 2023)) is a digital platform that comprises a website, a social network, and an app, providing access to a virtual environment for meetings, debates, support, and accompaniment. It was developed by Fundación ISYS, with patients as the primary focus, specifically those with breast cancer. The project has created the Xemio app (Figure 1), an app designed by patients and doctors, and all its content is reviewed and updated by oncology professionals to help patients with disease self-management and social issues. The platform, built for smartphones, helps patients and their families track side effects and treatments, as well as participate in activities and social events organized by various associations. The Xemio platform is endorsed by SOLTI, a scientific society dedicated to breast cancer research, and by the Catalan Society of Family and Community Medicine (CAMFiC). It has received support from the “la Caixa” Foundation, a European Union Horizon 2020 grant, and crowdfunding.
In order to assess the patients’ preferences for the prototype of the Xemio app, the research group decided to conduct a focus group as the first step in a series of participatory user-centered activities to develop a mobile app that is well received by patients.
The results presented in this article from the focus group are part of a larger research project. The Xemio app is integrated with the Electronic Medical Record of Hospital Clínic de Barcelona [24]. This will allow oncologists to access and interact with data recorded by patients participating in the pilot study. This integration is part of the European project “Artificial Intelligence Supporting Cancer Patients across Europe” (ASCAPE) (ClinicalTrials.gov Identifier: NCT04879563). ASCAPE aims to identify quality-of-life problems based on PROMs and support treatment recommendations.
In order to establish the design process of the Xemio app, a qualitative observational study was previously conducted [25,26]. The study incorporated semi-structured interviews with five patients from a local patient association [27], with the aim of identifying the desired content and features of a mobile app to assist individuals living with breast cancer [28]. The smartphone app prototype was developed with the help of an oncologist and two general practitioners belonging to the research group, and the present study was carried out on the basis of that prototype.
## 2. Objectives
The aim of this focus group was to gain an in-depth understanding of the needs of breast cancer patients during treatment and assess the feasibility of a smartphone application prototype developed by a research team of patients and oncology professionals.
## 3. Methodology
Qualitative research methods provide a deeper understanding of social issues. These techniques offer more opportunities for gaining in-depth knowledge about a specific topic compared to quantitative research methods [29,30]. A focus group is a commonly used qualitative research technique [25,26] that does not require extensive resources and enables interactive feedback and suggestions from participants during the sessions. It helps to identify key areas for improvement in a product or service [31]. This study followed the flowchart of steps for conducting a focus group discussion [29].
Each focus group session began with a brief introduction by the moderator, followed by a presentation of the content to be discussed. The moderator then asked the participants about their experiences.
## 3.1. Patient Identification and Patient Recruitment
Before patient selection, it was decided that the group should be composed of breast cancer survivors treated at the Hospital Clínic. None of the patients were on active cancer treatment when the focus group took place. The women invited to participate represented the prototype patient cases designed for the focus group: different age groups (between 50 and 65 years old, and over 65 years old, considering that the average age of breast cancer patients among white women is 63), combined with situations of employment or unemployment and of living alone or with family. Although there is a generational gap in the use of new technologies, the researchers decided to include older adult patients in the focus group, as this is a very suitable methodology for reaching marginalized groups [32,33].
Due to the COVID-19 emergency in June 2020, it was challenging to recruit additional patients for the focus group. In accordance with the COVID-19 regulations on gatherings of people in enclosed spaces and hospitals, the focus group consisted of 5 participants aged between 52 and 71 years old. The archetypes of interest were the following:
- Patient 50–65 years old, employed;
- Patient 50–65 years old, unemployed;
- Patient > 65 years old who lives alone;
- Patient > 65 years old who lives with family members.
Participant identification was followed by participant recruitment. For the recruitment of participants, the collaborating oncologist drew up a list of possible candidates following the mentioned archetypes as best as possible. The oncologist contacted the candidates over the phone.
## 3.2.1. Data Collection on Patient Information Needs, Services, and Activities (Session I)
The focus group was divided into two sessions. Both sessions occurred on the same day. The first session was conducted without giving the patients any prior indication of what they would evaluate in the second session; it aimed to explore the patients’ immediate impressions regarding the information, services, and activities that they considered helpful during cancer treatment.
During the first part of the focus group, the moderator, an experienced doctor in charge of the Patient Experience department of Hospital Clínic de Barcelona, proposed the topics to be discussed with the participants. The topics of discussion were agreed upon beforehand with two other experts who also acted as observers: an oncologist and an Information Society expert. The topics to be discussed were as follows:
- Treatment of symptoms;
- Advice on how to cope with side effects;
- Services or activities needed throughout the cancer process;
- News that would be of interest during this period.
The proposed discussion contents were elaborated by a thematic study group and aligned collectively, taking the participants’ previous experiences into account. The session was planned to last 50 min.
Data collection in the first session was performed by audio recording (with subsequent transcription), note-taking, and participant observation.
## 3.2.2. Cognitive Walkthrough Test with Users (Session II)
Cognitive walkthrough (CW) is a method of inspecting the usability of an interactive system that focuses on evaluating the ease of learning a new tool [30]. Its purpose is to analyze how a user thinks and behaves when they first use an interface. It is known that if users are given a choice, they prefer to learn based on exploration and observation, rather than reading manuals or following instructions [34].
The patients were allowed to interact with the Xemio app in the second part of the focus group. During this activity, the patients were given a smartphone with the app installed and instructions about which activities to perform with it. Their impressions regarding the content and usefulness of the tool were collected. Data collection in this session was achieved using questionnaires, by registering the navigation of the app, and through the observers’ notes on the participants.
The research team developed two questionnaires and a user test to evaluate the ease of use, effectiveness, and efficiency of the Xemio application. The session was designed following “Usability Inspection Methods” (Jakob Nielsen, 1994), taking special care with golden rules such as “one task = one action”.
The order of the CW session was as follows:
- First questionnaire: aimed at understanding the participants’ degree of literacy in handling smartphone applications;
- User test: a selected member of the research group with expertise in the app acted as the facilitator by explaining the tasks to be completed;
- Second questionnaire: aimed at assessing the usefulness and contents of the application.
The second questionnaire also served to contrast the answers that emerged from the first part of the focus group. A 50 min session was planned to complete this part.
## 3.3. Venue for the Discussion
The focus group was held in the living lab of the Hospital Clínic de Barcelona, a space dedicated to sharing experiences with patients called Espai de Intercanvi d’Experiències (EIE, Space for the Exchange of Experiences). The EIE is a physical space within the hospital that facilitates reflection, rethinking, and the co-creation of solutions to improve care services and increase their value from the patient’s perspective.
## 3.4. Data Analysis
The content analysis was carried out by coding the four thematic categories proposed, grouping and classifying the comments as positive and negative, locating the areas of interest, collecting the scores from the questionnaires, and analyzing the fluidity of navigation in the app.
## 4. Results and Reporting
The results are organized into three distinct sections. The first section includes an analysis of the results from the first session of the focus group. The second section focuses on the analysis of the questionnaires and the tasks performed with the Xemio app. Finally, the third section compares the results from both the first and second sessions.
## 4.1. The Capture of Information and Follow-Up Needs
The topic of the first part of the session was symptoms from the treatment (side effects) and their intensity. The conversation focused on treatment effects on body image, such as hair loss, spots on the skin, weight gain, and increased sweating. Afterward, the moderator asked directly about other side effects, such as effects on sleep, sexual life, or nutrition. Three participants pointed out that they experienced a metallic taste when eating food. In addition, some participants pointed out a weight loss at the beginning of the treatment that was recovered later. Topics related to surgery side effects, especially lymphedema (cork-like tenderness in the arm), were also mentioned in the discussion without going into much detail. Finally, two patients mentioned problems with mental focus and memory. The focus group participants maintained an objective and positive attitude throughout the discussion.
During the session, the moderator collected most of the relevant information on a Metaplan board, organized into four main topics: symptoms, side effects, services, and news about cancer treatment advancements (Table 1).
## 4.1.1. Textual Phrases Captured from Patients
In the second part of the session, the moderator focused on how patients manage the treatments’ side effects. In this part, the participants recalled digestive side effects, nausea, mouth sores, skin burns from radiotherapy, fever, fatigue, and general malaise. However, most patients claimed to have received complete information on managing their symptoms from the hospital oncology staff. The overall perception was positive: they had been fully informed and had help when needed.
In the third part of the session with the moderator, the patients of the group were asked about which services outside the hospital they used during their treatment and about their participation in activities carried out by patient associations. The first thing the participants mentioned was information of a practical nature to adapt to their new reality, such as the location of stores where they could buy wigs and scarves. The youngest patient admitted searching for terminology on the Internet. One of the older patients explained how she signed up for adult classes at the university. One of the patients expressed that she had attended a patient association session of the “Kálida” space at the Sant Pau hospital in Barcelona. When asked about the reason for not participating in patient association activities, they replied that the hours were unsuitable for them and that they maintained other personal activities.
The fourth section of the discussion with the moderator was about news consumption preferences. Participants were asked about their need to consume specific news. The participants expressed that they thought there was an excess of information on the Internet. Another topic of conversation was news about their cancer in conventional media that created false expectations. When asked about topics of interest for generating news, the general agreement was a preference for practical news, with content such as nutrition and aesthetics tips and an agenda of group activities.
## 4.1.2. Cognitive Walkthrough Test with Users
After a short break, the second session of the focus group was presented to the patients. This session started with a pre-test to find out the participants’ everyday use of Information Communication Technologies (ICT) resources. The results of this questionnaire are shown in Table 2.
A digital generation gap is visible in the use of tools by age: the patients in the 50-year age group were the most likely to use Internet tools, and the older patients were less likely to use them.
## 4.2. Results of the Xemio App User Test
Observational comments on the required tasks:
- Task [1] Find a side effect in Xemio: Patients were asked to find and record side effects in the app. P1, P3, and P4 had no difficulty, P2 had many navigation issues, and P5 also had some difficulties. They generally believed that navigating the side effects area and moving between intensity and recommendations could be more intuitive. Patients looked for specific effects that were not included in the app (e.g., cardiac side effects) and expected to find a free-text field where they could record them; this is a use case we had not developed yet.
- Task [2] Register a treatment in Xemio: Patients were asked to register the intensity of effects on nails. Participant P5 could not find the option to reach the functionality to select and register an intensity. There was general difficulty registering the intensity (it is not intuitive). Participants had problems returning to the previous screen when recording side effects and intensities. The image of the body for recording dry skin was very well understood, and the body part could be chosen; however, participants had problems understanding how to record the intensity. For example, the head only let them select moderate intensity.
- Task [3] Consult information on types of cancer: There was confusion between the side menu and the bottom menu. P3 asked if she was able to zoom in. She expressed that the letters and symbols could not be seen well.
- Task [4] Register for an event in the social agenda: the moderator decided not to complete this task when she realized that the patients were starting to have difficulty processing more new information and were struggling to follow the pace of the session.
- Tasks [5] Configure my personal data and [6] Generate a PDF document with my histories: these gave errors to some participants when they entered their data; however, they could access My Diary.
After the experience of interacting with the Xemio app, participants were asked about their opinion of the application.
To ensure a better representation of opinions, with more options than Yes/No or True/False answers, a seven-point Likert scale questionnaire was designed, giving participants the means to express themselves more accurately and to better represent their assessment. The seven-point Likert scale was also chosen to reduce the possibility of random or inconsistent responses and to avoid the neutral judgments that occur with five-point Likert scales. In general, the seven-point Likert scale can be a good choice for collecting detailed information about a participant’s evaluation. The results of this questionnaire are shown in Table 3, and the results of the open-ended questionnaire are shown in Table 4.
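As an illustration of how responses on such a scale are typically summarized, the sketch below computes the median, mode, and range per item. The items and scores are hypothetical placeholders; the actual questionnaire results are those reported in Table 3.

```python
# Minimal sketch: summarizing 7-point Likert responses.
# The items and scores below are hypothetical, not the Table 3 data.
from statistics import median, mode

# One list of scores (1 = strongly disagree ... 7 = strongly agree) per item.
responses = {
    "The app is easy to use": [6, 3, 7, 5, 2],
    "The contents are useful": [7, 6, 6, 5, 4],
}

for item, scores in responses.items():
    # Median and mode suit ordinal Likert data better than the mean.
    print(f"{item}: median={median(scores)}, mode={mode(scores)}, "
          f"range={min(scores)}-{max(scores)}")
```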
## 4.3. Combined Results
The results of the two parts of the study were somewhat different. In the first part, patients expressed complete confidence in the information provided by the oncology unit, describing it as accessible, complete, and understandable. In contrast, they expressed concern about information found on the Internet and the possibility of encountering false information.
In the second part of the study, after using the application, the participants viewed it as a positive addition to their existing sources of information. They expressed interest in the ease of access to information about practical events organized by other entities.
Comparing the results of each session, the participants who struggled with the tasks in the second part were the same individuals who did not use smartphones to access the Internet. Additionally, participants P2 and P5 had more difficulty navigating the application than the other participants.
## 5. Discussion
This focus group helps to choose functionalities and to define the process of evolution and continuous improvement of the Xemio application. Selecting suitable candidates to participate in the focus group was essential for generating critical feedback and the knowledge necessary to identify unmet needs. The wide range of focus group participants meant that each participant provided valuable, distinct input.
The age of the patient is a key factor in determining the probability that the patient will incorporate technology, specifically this app, into their daily routine. The younger participants in the group had no difficulty navigating the app, while the older patients required assistance to complete tasks. Applications designed to support patients with cancer or chronic diseases may not be appropriate for those who have not acquired basic technology skills. As a result, these technological tools should not yet be considered a standard of care as they may exclude a significant portion of patients.
However, mHealth applications have the potential to become a normal part of the standard of care in the near future, as more cancer patients acquire the necessary technological skills to use these tools. It is important to involve potential users from the beginning of the design process and throughout its evolution. There is currently a lack of evidence regarding patient knowledge and participation in the development and evaluation of medical applications [11,35]. Typically, technologies are presented to patients without their involvement in the design process and only later are they asked about their usefulness in clinical practice. To address this issue, it is crucial to adopt a patient-centered approach and prioritize identifying unmet needs before beginning the design process.
## 6. Principal Findings
Despite the limitations of this focus group, the results suggest that while breast cancer patients believe they receive adequate care and that the hospital services meet their needs, there is still room for improvement in patient care and support. This highlights the need for more research and efforts to enhance patient care in this field, even though patients are currently satisfied with the care they receive.
The adoption of mHealth tools, such as the Xemio app, has the potential to revolutionize the way chronic patients receive care in hospitals. By using smartphones, tablets, and wearable devices, patients can remotely monitor their health and communicate with their healthcare providers, without having to visit the hospital as frequently. This not only saves time and resources for patients but also reduces the burden on hospitals and healthcare providers, enabling them to focus on providing more complex care to those who need it most.
Additionally, mHealth tools can provide real-time health data, allowing providers to make more informed decisions about a patient’s care. This can lead to improved outcomes and a higher quality of care for patients. With the increasing availability of sophisticated mHealth technologies, the potential for improving care for chronic patients is enormous, and it is an area that is receiving increasing attention from researchers, healthcare providers, and policymakers alike. This could eventually lead to overall quality improvements in patient care. This was demonstrated during the second session of the focus group when patients expressed how much they appreciated the app and found it informative. This conversation led to the patients wishing they had the option to use the app on their phones during their initial diagnosis, treatment, and ongoing cancer process.
## 6.1. Comparison with Prior Work
A few years ago, researchers conducted a review to evaluate the effectiveness of mHealth tools to support patients with chronic disease management [36]. The study, which referred to mHealth tools used for disease management as “mAdherence”, also explored the usability, feasibility, and acceptability of these tools. The researchers found that mAdherence tools and platforms were generally highly usable, feasible, and acceptable. However, they also pointed out that there is limited information available on how mHealth tools are designed to meet the needs of specific patient populations. For example, they noted that older patients may have difficulty traveling to a healthcare provider’s office and that mAdherence tools could ease this burden. The researchers recommended an iterative design process that includes systems and content development and multiple stages of user experience testing.
In the same review, Hamine et al. [36] found that 62 out of 107 studies explored the usability, feasibility, acceptability, or patient preferences for mAdherence interventions. The authors found that 27 studies in their search used randomized controlled trials (RCTs) to explore the impact on adherence behaviors, and significant improvements were observed in 15 of those studies. Of 41 RCTs, 16 showed significant differences between groups regarding effects on disease-specific clinical outcomes. The conclusion of the review article is that mHealth tools have the potential to facilitate adherence to disease management; however, the evidence supporting their effectiveness is, so far, mixed.
## 6.2. Limitations
The oncologist treating the patient was present at the first session of the focus group. The presence of the oncologist may have changed what the participants revealed. They may have chosen not to share specific experiences because they thought it might affect their treatment or relationship with their doctor or the hospital.
Another limitation is that the sample size was very small and limited to a single focus group. Recent publications [37,38] suggest that at least three groups are necessary to capture significance and saturation. Because of the COVID-19 emergency in June 2020, it was difficult to increase the number of focus group participants through recruitment and invitations to patients. The focus group was held at the Hospital Clínic de Barcelona, which was facing a shortage of resources due to the pandemic, making it challenging to schedule additional dates for the process. The focus group was carried out following all hygiene and safety regulations established by the government, and additional precautions were taken to avoid contact between participants considered to be at high risk.
## 7. Conclusions
While patients currently receive adequate care, there is always room for improvement, and mHealth tools have the potential to play a major role in enhancing patient care and support in the field of health and wellness.
Upcoming work will involve a long-term randomized pilot to investigate how using the Xemio app impacts the quality of life of breast cancer survivors, expected to be published during 2023. Further work will also involve the continuous evolution of the app to provide better and updated services to the users, supporting them throughout the cancer process, including patient evaluation tools such as interviews and PREM (patient-reported experience measure) questionnaires.
# Risk Factors for Mortality of Hospitalized Adult Patients with COVID-19 Pneumonia: A Two-Year Cohort Study in a Private Tertiary Care Center in Mexico
## Abstract
During the COVID-19 pandemic, the high prevalence of comorbidities and the disparities between the public and private health subsystems in Mexico substantially contributed to the severe impact of the disease. The objective of this study was to evaluate and compare the risk factors at admission for in-hospital mortality of patients with COVID-19. A 2-year retrospective cohort study of hospitalized adult patients with COVID-19 pneumonia was conducted at a private tertiary care center. The study population consisted of 1258 patients with a median age of 56 ± 16.5 years, of whom 1093 recovered (86.8%) and 165 died (13.1%). In the univariate analysis, older age (p < 0.001), comorbidities such as hypertension (p < 0.001) and diabetes (p < 0.001), signs and symptoms of respiratory distress, and markers of acute inflammatory response were significantly more frequent in non-survivors. The multivariate analysis showed that older age (p < 0.001), the presence of cyanosis (p = 0.005), and previous myocardial infarction (p = 0.032) were independent predictors of mortality. In the studied cohort, the risk factors present at admission associated with increased mortality were older age, cyanosis, and a previous myocardial infarction, which can be used as valuable predictors of patients’ outcomes. To our knowledge, this is the first study analyzing predictors of mortality in COVID-19 patients attended in a private tertiary hospital in Mexico.
## 1. Introduction
Two years after being declared a global pandemic by the World Health Organization (WHO) on 11 March 2020, coronavirus disease-2019 (COVID-19) has caused more than 449 million cases and 6.6 million deaths [1,2]. COVID-19, caused by the severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), is transmitted primarily through large respiratory droplets. The disease presents with a wide array of clinical presentations, ranging from asymptomatic, mild respiratory, or extrapulmonary disease to life-threatening respiratory failure, multi-organ failure, and death [1,3,4].
Due to the magnitude of the pandemic and the current absence of an effective curative treatment, several studies have reported the clinical and epidemiological characteristics of their respective populations [1,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. These can be used as a proxy for the prediction of patients’ outcomes. Currently, risk factors related to worse clinical outcomes and mortality include older age; male sex; obesity; comorbidities such as diabetes, hypertension, and heart failure; and laboratory features compatible with an inflammatory state [1,2,3,4,7,8,9,12,14,21,26,28,29].
Latin America and the Caribbean (LAC) has arguably been one of the areas most impacted by the pandemic, with five of the region’s countries among the 20 with the highest numbers of reported cases and deaths [30,31]. The pandemic has had a very elevated socioeconomic impact on the region, particularly affecting vulnerable populations: groups with a high poverty index or a lack of formal employment [21,31], as well as those with preexisting comorbidities, exacerbated by deficiencies of the health institutions in vulnerable countries [32]. Most of these countries are unable to guarantee public healthcare to a considerable percentage of the population. As a response to this lack of complete public coverage, health systems in countries such as Mexico are forced to rely heavily on private spending [32,33,34].
Mexico has experienced six waves of the disease, resulting in more than 7.2 million cases and 331,407 deaths to date [35]. Despite not having the highest mortality rate in LAC, Mexico currently stands as the fifth country with the most deaths worldwide [35]. This alarming mortality, correlated with the aforementioned risk factors [21,23,36,37], can also be associated with differences among healthcare institutions. Evidence suggests that the lack of homogeneity in available resources, infrastructure, quality of care, and standardized protocols may have resulted in a higher probability of dying from COVID-19 in public healthcare facilities than in private institutions [21,38,39,40]. Considering this, it is necessary to analyze the statistical behavior of the pandemic in public and private institutions independently. This would in turn present an image of the interaction between the pandemic and the two different healthcare environments, correlating with socioeconomic implications, such as inequalities in healthcare access and the cultural disparities of marginalized groups, which continue to shape the evolution of the pandemic in Mexico.
In this study, the findings from a 2-year retrospective large cohort study from a private tertiary care center in Guadalajara, Mexico, are reported. This study aims to describe and compare clinical characteristics, laboratory and radiological findings, and mortality among adult patients hospitalized with COVID-19 pneumonia in a Mexican private tertiary care center from April 2020 to March 2022.
## 2.1. Study Design
A retrospective cohort study was conducted at San Javier Hospital (SJH), a private tertiary care center located in Guadalajara, Jalisco, Mexico, that included all adult patients admitted to the hospital with a confirmed diagnosis of COVID-19 from 4 April 2020 to 3 March 2022. Patient admission was based on the National Institutes of Health (NIH) severity of illness categories [41], admitting all those with severe or critical COVID-19 and those with moderate illness at high risk of progressing to severe disease, as determined by each attending physician.
The primary outcome was in-hospital mortality, without a set timeframe for it to occur. Inclusion criteria were: (1) adult age (≥18 years old), (2) admission to SJH with a new diagnosis of COVID-19 pneumonia, (3) SARS-CoV-2 infection confirmed by RT-PCR of a nasopharyngeal swab with the Berlin protocol, and (4) a definite discharge or COVID-19-related death outcome. Exclusion criteria were: (1) interhospital transfer from our institution to another hospital and (2) discharge against medical advice. The present research was conducted in accordance with the Declaration of Helsinki, as we adhered to the General Principles of the World Medical Association; the importance of the objective outweighed the risks to the participants of the study, our research used accepted scientific principles based on the scientific literature, and all data were maintained with confidentiality [42]. The study was approved by the research ethics committee of the SJH with the register number 002-08-2022-MLZ. Due to the observational and retrospective nature of the study, no informed consent was required. Decisions regarding the diagnostic approach, treatment, and follow-up were the responsibility of the attending physician, considering that during the pandemic different medical treatments were used based on the best scientific information available at each moment.
## 2.2. Data Collection
Epidemiological data were retrieved from the electronic medical record (TASY) of the primary and secondary evaluations performed by first-contact physicians at the respiratory care unit. Additional clinical and laboratory information, the clinical outcome (survival or death), and the pathway to death were obtained from the electronic medical record (EMR). Initial laboratory tests were defined as the first results available, typically within 24 h of hospital admission [24], including a complete blood count, liver panel, basic metabolic panel, C-reactive protein (CRP), D-dimer, and troponin I, among others.
## 2.3. Definitions
Co-morbidities were defined as follows: chronic obstructive pulmonary disease (COPD) as a post-bronchodilator FEV1/FVC ratio of <0.70 [43]; asthma as established by the Global Initiative for Asthma 2020 [44]; chronic kidney disease (CKD) as a glomerular filtration rate below 60 mL/min for more than three months [45]; diabetes according to the guidelines of the American Diabetes Association [46]; hypertension as systolic blood pressure ≥140 mmHg and/or diastolic blood pressure ≥90 mmHg [47]; and immunosuppression as neutropenia (fewer than 500 neutrophils), active malignant disease, asplenia, or immunosuppressive treatment (prednisone >20 mg/day or other immunosuppressive drugs for at least 30 days) [21,22].
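The threshold-based definitions above translate directly into code. The sketch below is purely illustrative: the field names and record format are assumptions of ours, not the study’s EMR schema, and asthma and diabetes are omitted because their guideline definitions are not single numeric cutoffs.

```python
# Illustrative encoding of the threshold-based comorbidity definitions above.
# Field names are assumed for the example; they are not the study's schema.
def classify_comorbidities(p: dict) -> dict:
    return {
        # COPD: post-bronchodilator FEV1/FVC ratio < 0.70
        "copd": p["fev1"] / p["fvc"] < 0.70,
        # CKD: glomerular filtration rate < 60 mL/min for > 3 months
        "ckd": p["gfr"] < 60 and p["low_gfr_months"] > 3,
        # Hypertension: systolic >= 140 mmHg and/or diastolic >= 90 mmHg
        "hypertension": p["sbp"] >= 140 or p["dbp"] >= 90,
        # Immunosuppression: neutropenia (< 500 neutrophils), active
        # malignancy, asplenia, or immunosuppressive treatment
        "immunosuppression": (p["neutrophils"] < 500 or p["active_cancer"]
                              or p["asplenia"] or p["immunosuppressants"]),
    }

example = {"fev1": 1.8, "fvc": 3.0, "gfr": 45, "low_gfr_months": 6,
           "sbp": 150, "dbp": 85, "neutrophils": 3200,
           "active_cancer": False, "asplenia": False,
           "immunosuppressants": False}
print(classify_comorbidities(example))
# -> {'copd': True, 'ckd': True, 'hypertension': True, 'immunosuppression': False}
```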
Definitions for the causes of death include: acute respiratory distress syndrome (ARDS) according to the Berlin definition [21,25], septic shock according to the 2016 Third International Consensus Definition for Sepsis and Septic Shock [48], and myocardial infarction following the guidelines of the Fourth Universal Definition of Myocardial Infarction [49].
## 2.4. Statistical Analysis and Tools
According to their distribution and type, the variables are summarized as the mean and standard deviation or the median with ranges and percentages (%), as appropriate. Demographic and clinical characteristics were compared between survivors and non-survivors using the chi-square test and Student’s t-test, as appropriate. Variables that proved to be statistically significant in the univariate analysis underwent multivariate ANOVA to discriminate confounding variables. The variables that remained significant in this analysis were assessed by Cox regression (forward likelihood-ratio method). We considered a two-tailed p < 0.05 as statistically significant.
The statistical software used for the analysis was SPSS 24.0 (SPSS Inc., Chicago, IL, USA). Figure 1 was created with Microsoft Excel version 2301 (Microsoft, Redmond, WA, USA). Supplementary Figure S1 was created with GraphPad Prism v.6 (GraphPad, Boston, MA, USA).
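For readers who wish to mirror this pipeline outside SPSS, the sketch below reproduces the same sequence of steps with open-source Python tools. The input file and column names are hypothetical, and SPSS’s forward likelihood-ratio selection is not automated here; this is an outline under those assumptions, not the study’s actual code.

```python
# Sketch of the described univariate/multivariate sequence (the study itself
# used SPSS 24.0). The file and column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # one row per patient (hypothetical file)

# Univariate: chi-square test for a categorical variable...
chi2, p_cat, _, _ = chi2_contingency(pd.crosstab(df["hypertension"], df["died"]))
print(f"hypertension vs. death: p = {p_cat:.3f}")

# ...and Student's t-test for a continuous variable.
t, p_cont = ttest_ind(df.loc[df["died"] == 1, "age"],
                      df.loc[df["died"] == 0, "age"])
print(f"age, survivors vs. non-survivors: p = {p_cont:.3f}")

# Multivariate: Cox regression on the variables that stayed significant
# (the paper applied a forward likelihood-ratio selection at this step).
cph = CoxPHFitter()
cph.fit(df[["los_days", "died", "age", "cyanosis", "prior_mi"]],
        duration_col="los_days", event_col="died")
cph.print_summary()  # hazard ratios with two-tailed p-values
```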
## 3. Results
In the study period, spanning from 4 April 2020 to 3 March 2022, 1377 patients were admitted under the diagnosis of confirmed SARS-CoV-2 pneumonia, 119 of whom were excluded due to interhospital transfer or voluntary discharge against medical advice. The study population consisted of 1258 patients, of whom 1093 recovered (86.8%) and 165 died (13.1%). The median age was 56.2 ± 16.5 years, being 68.3 ± 14.2 years for non-survivors and 54.4 ± 16.0 years for survivors. The mean length of stay was 12.2 ± 13.7 days, being significantly longer in the patients who died than in survivors. In total, 243 (19.3%) patients were admitted to the intensive care unit (ICU), and 200 (15.9%) were mechanically ventilated (MV). A significant association was observed between the need for MV or ICU admission and in-hospital death. Among survivors, 86 (7.8%) received mechanical ventilation, and 107 (9.7%) were managed in the ICU. Figure 1 shows the patient distribution regarding the number of hospital admissions, hospital discharges, ICU admissions, and in-hospital deaths during the study period. Similar to what was observed in the general population, three waves of disease are visible in the figure during the study period, with the peaks of hospital admissions in December 2020, August 2021, and January 2022.
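The reported rates follow directly from these counts. As a quick arithmetic check (all numbers taken from the text above; the ICU figure differs from the 55.9% quoted in the Discussion only by rounding):

```python
# Quick consistency check of the rates reported in this cohort;
# all counts are taken from the text above.
admitted, excluded = 1377, 119
cohort = admitted - excluded            # 1258 patients analyzed
survivors, deaths = 1093, 165
assert survivors + deaths == cohort

icu, icu_survivors = 243, 107           # ICU admissions / ICU survivors
mv, mv_survivors = 200, 86              # ventilated / ventilated survivors

print(f"overall mortality: {100 * deaths / cohort:.1f}%")          # 13.1%
print(f"ICU mortality: {100 * (icu - icu_survivors) / icu:.1f}%")  # 56.0%
print(f"MV mortality: {100 * (mv - mv_survivors) / mv:.1f}%")      # 57.0%
```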
Demographic, clinical, and laboratory characteristics at admission of survivors and non-survivors are shown in Table 1, Table 2 and Table 3. Several of these variables showed statistical significance (p < 0.05) in the univariate analysis.
The mechanisms of death are summarized in Supplementary Table S1. The most common cause was multi-organ failure (42.4%), followed by ARDS (33.9%) and septic shock (10.9%). Other causes, including unstable bradycardia, pulmonary embolism, myocardial infarction, and hypovolemic shock, were much less common. The vaccination status of both survivors and non-survivors can be seen in Supplementary Table S2. The observed and expected values of the variables analyzed using the chi-square test can be found in Supplementary Table S3. Nonparametric plots comparing the MULBSTA, Charlson, and NEWS scale scores between survivors and non-survivors can be found in Supplementary Figure S1.
In the multivariate analysis (Table 4), the variables that independently predicted mortality, identified by Cox regression analysis, were older age (>60 years old), cyanosis, and previous myocardial infarction.
## 4. Discussion
To our knowledge, this is the first large cohort study of COVID-19 in-hospital mortality and its associated risk factors in patients attended exclusively in a private hospital in Mexico, and one of the few in LAC. In 2021, LAC was the region with the highest number of COVID-19 deaths and deaths per 1000 population, representing 28.8% of globally reported deaths while having only 8.4% of the global population [50]. In our cohort, the overall in-hospital mortality was 13.1%, which contrasts with the mortality reported by other hospitals in this country (22–53%) [21,22,23,25,26] as well as with some cohorts in other LAC countries [5,18].
The significantly lower mortality rate found in our cohort can be explained by several factors, chiefly the fact that our hospital belongs to the private health subsystem, unlike other Mexican cohorts based in public health services [21,22,23,25,26]. Márquez-González et al. [27], Carrillo-Vega et al. [51], and Salinas-Escudero et al. [52] analyzed the national database to identify the risk factors for hospitalization and death in the Mexican population, showing a lower survival rate among patients hospitalized in public institutions. This problem is prevalent among health systems in most LAC countries, which, to varying degrees, all lack universal public health regimes, instead relying heavily on private subsystems and, in most cases, considerable out-of-pocket expenses [32,53,54,55]. In their cohort study of a private healthcare network in Brazil, De Oliveira et al. also reported a considerably lower mortality rate compared with other cohorts from the public subsystem in Brazil and other parts of the world [5]. Aside from age, which was also lower than the mean reported in other studies, they attributed this difference partly to the disparities between private and public hospitals. The Mexican health system’s highly heterogeneous organization and quality of care have allowed discrepancies in healthcare to persist to date. The system of care is divided into four main subsystems (private healthcare providers and the public institutions Instituto Mexicano del Seguro Social (IMSS), Instituto de Seguridad y Servicios Sociales para los Trabajadores del Estado (ISSSTE), and Secretaría de Salud (SS)), all of which remain fragmented and incapable of delivering universal care [32,33,34,38,56]. Public institutions represent the health services with the highest demand, which puts them at a higher risk of exceeding their operating capacity, resulting in hospital saturation and heightened mortality [51].
Another factor to consider regarding the difference in mortality rates is that, while many Mexican cohorts analyzed the first months of the pandemic, our study spanned a 2-year period. Thus, the evolution of clinical knowledge of COVID-19, a lesser degree of bed saturation and overcrowding of critical areas, and the effect of vaccines over the last months of the studied period most likely contributed to a decrease in in-hospital mortality. On the other hand, the inclusion of patients with an initial moderate NIH severity of illness probably contributed to this result, although only 90 patients with this characteristic were present in the study population. Finally, a factor that was not considered in our study was the effect of the newly emerging COVID-19 variants. One of particular relevance is the Omicron variant, reported in November 2021, the fifth variant of concern (VOC) posing a threat to global public health. Omicron emerged as the most mutated variant and the most transmissible and resistant to immunotherapeutics and vaccines. Nonetheless, Omicron proved to be milder than the previous variants, mostly causing upper respiratory tract symptoms and resulting in low mortality rates [57,58].
In our study, 19.3% of patients received care in the ICU and 15.9% were MV. ICU and MV mortality were 55.9% and 57%, respectively, similar to other Mexican [21,23,24,25] and global [5,20] cohorts. Both ICU admission and the need for MV were significantly more frequent in non-survivors, which has been commonly reported amongst many cohorts, highlighting the importance of ICU management and MV as predictors of death in patients hospitalized due to COVID-19 [5,20,25,26,27].
Hypertension and diabetes are comorbidities identified by several studies as risk factors for mortality. Although they were identified as predictive in the univariate analysis, they were not included in the final multivariate model. Both are highly prevalent in LAC and in the Mexican population [21,25,59]. Hypertension is one of the comorbidities most commonly associated with increased mortality in COVID-19 patients, though the exact mechanism remains unclear [10,11,22,25,59,60,61]. Its prevalence in our cohort was similar to the national average (31%) and to that of other LAC countries [4,21,25]. The use of ACEIs/ARBs represented a significant difference between the two groups. Although a possible mechanism has been proposed by which RAAS blockers increase ACE2 expression, potentially increasing the risk of SARS-CoV-2 infection, the effect of ARB or ACEI use on disease severity is still controversial [22,25]. In our cohort, diabetes presented with a higher prevalence than the national average (13.7%) [21]. As with hypertension, it has been associated with COVID-19 severity and mortality [3,19,59,61,62], with many proposed mechanisms, including reduced resistance to viral infections as a consequence of a sustained low level of immunity, as well as vascular and heart damage due to longstanding disease [62].
Overweight status and obesity showed no difference between the two groups. Though they have been associated with increased disease severity in COVID-19 patients in some studies, the association remains unclear, with mixed results in the literature [5,18,25]. A meta-analysis conducted by Mesas et al. [61] showed that increased mortality was present only in studies with fewer chronic or critical patients, meaning that BMI served as a prominent prognostic factor only in studies with these conditions, which was not the case in our study.
Immunosuppression [3,5,12,15,62,63], cancer [10,59,61,62], and chronic kidney disease [12,17,20,52,61] are other important comorbidities that have been reported as predictive risk factors for mortality in different cohorts. Despite the fact that they were significantly more frequent in the mortality group in our cohort, they did not remain significant in the final multivariate model.
After the univariate analysis, significant variables were analyzed by multivariate ANOVA and then by Cox regression analysis to determine the explicative and predictive variables. In the resulting model, older age, the presence of cyanosis, and previous myocardial infarction were the main predictors of mortality, consistent with the findings of other cohorts. In several studies, age was found to be a main determinant of COVID-19-related in-hospital mortality, independent of other pre-existing comorbidities [64,65,66,67]. The median age in our study was 56.2 ± 16.5 years, similar to other large cohorts in our country [21,22,23,25,26]. As previously established, age has been reported as one of the most important risk factors, being associated with higher mortality plus extended hospital and ICU stays [6,27,59]. In our study, age was identified as a risk factor for mortality (non-survivors were, on average, 14 years older than survivors) and remained an independent mortality risk factor after multivariate analysis. This may be explained by contributing factors such as age-related physiological changes, impaired immune function, and preexisting illnesses [18,20,59,62]. At this point in the pandemic, older age is well established as a strong predictor of severity and mortality in patients with COVID-19, which prompts early referral of older individuals for inpatient care [11,19,28,60]. In one study conducted in the same city as our present research, age, along with other factors, was also found to be a mortality predictor in multivariate analysis [25]. Another predictive variable was the presence of cyanosis. Although identified as a mortality-related risk factor in the univariate analyses of some studies, our cohort, to the best of our knowledge, is the first to include it in the final multivariate model [18,68]. Finally, a history of previous myocardial infarction was also an important predictor of mortality. Cardiovascular disease has been extensively associated with worse outcomes in patients with COVID-19 [3,8,12,61,69]. Specifically, a history of ischemic heart disease was found to be a significant variable in some cohorts [3,20]. Similar to our results, one study also reported myocardial infarction as a predictor of mortality in the multivariate analysis [67].
An important aspect to consider while analyzing COVID-19 mortality is the evaluation of the role of SARS-CoV-2 infection in such deaths [70]. At the start of the pandemic, some COVID-19 deaths may have been misclassified as being due to other causes, while conversely, during peak pandemic periods, a bias in the opposite direction probably occurred [19]. Due to the limited knowledge of the pathophysiology of COVID-19 death, as well as the high prevalence of comorbidities observed in deceased people who tested positive for SARS-CoV-2, the question of whether a patient died with or due to COVID-19 is still very much debated [71,72]. Assigning a primary cause of death to a deceased patient with multiple principal diagnoses that could lead to death has been challenging since before COVID-19 [73]. The problem of objectively identifying the “real” cause of death is not only relevant from a conceptual standpoint but also has many practical consequences regarding epidemiology, public health interventions and policies, health communication to the public, and political decisions [74]. Although numerous observational studies have reported outcomes and risk factors for mortality in COVID-19, the accuracy of the causes of death has seldom been reported [75]. This can be due to many factors, including the methods of assigning the primary cause of death, the impossibility of performing necropsies, and countries’ laws allowing only one cause to be reported on a death certificate [70,71,73,74,76].

Regarding the limitations of this study, its retrospective nature makes it prone to the under-documentation of many clinical variables, limiting the researchers’ capacity to obtain comprehensive data due to incomplete medical records. This was particularly relevant for determining the actual role of SARS-CoV-2 infection in each death, as the EMR often lacked the information necessary to evaluate whether COVID-19 was only an epiphenomenon for that particular death. Social determinants of the study population, such as median household income, were not assessed. As genomic sequencing data were not available, an analysis of the predominant variants of concern in each wave could not be performed. Due to the changing nature of the pandemic, along with the growing understanding of the disease, clinical practice improvements were implemented, the evaluation of which exceeds the scope of this study [5]. Finally, we excluded patients who did not spend the entire course of the disease in our institution, such as those discharged against medical advice or transferred to other hospitals, as we were therefore unable to assess their evolution. Despite these limitations, the size and duration of this study allowed us to provide a reasonably complete overview of the pandemic as it presented in our hospital [77].
Our study gains relevance as the socioeconomic consequences of COVID-19 continue to affect the population of our country, worsening socioeconomic inequality: while nonvulnerable groups are given the option of more reliable services, the more marginalized populations are left with no choice but to attempt to receive care in saturated, underfunded, and often uncoordinated public health subsystems [78]. These disparities further heighten the inequalities affecting vulnerable groups, including indigenous communities, migrants, people in overcrowded living conditions, informal workers, people with disabilities, and older adults, even more so in cases involving chronic diseases, which are also correlated with these same vulnerabilities [21,30,37,50,54,55,78]. While this is not limited to Mexico or LAC, as the syndemic relationship among social inequalities, chronic diseases, and COVID-19 has been reported at an international level [79], the conditions of the public and private subsystems, low healthcare spending, infrastructure, and other health-related policies have all had a considerably higher socioeconomic impact in LAC [32].
## 5. Conclusions
Mortality in hospitalized patients with COVID-19 in this Mexican private tertiary care center was 13.1%. Older age, the presence of cyanosis, and a previous myocardial infarction were the most significant independent risk factors for mortality in adult patients hospitalized with COVID-19 pneumonia in our 2-year cohort. Considering the significant disparities in the quality of care that exist between the private and public health subsystems in Mexico, our results gain special significance, as they contribute to a more complete overview of the healthcare system and its interaction with the pandemic.
# Becoming a Paralympic Champion—Analysis of the Morpho-Functional Abilities of a Disabled Female Athlete in Cross-Country Skiing over a 10-Year Period
## Abstract
The change from medical to functional classification in disabled cross-country skiing means that the athlete’s predispositions and performance abilities, above all, determine the final result in cross-country skiing. Thus, exercise tests have become an indispensable element of the training process. The subject of this study is a rare analysis of morpho-functional abilities in relation to the implementation of training workloads during the training preparation of a Paralympic champion in cross-country skiing when she was close to her maximal achievements. The study investigated the abilities evaluated during laboratory tests and how they relate to performance outcomes during major tournaments. An exercise test to exhaustion on a cycle ergometer was performed three times a year on a disabled female cross-country skier over a 10-year period. The morpho-functional level that enabled the athlete to compete for gold medals in the Paralympic Games (PG) is best reflected in the results she obtained in the tests in the period of direct preparation for the PG, and confirms that the training workloads in this time were optimal. The study showed that the VO2max level is presently the most important determinant of the physical performance achieved by the examined athlete with physical disabilities. The aim of this paper is to present the level of exercise capacity of the Paralympic champion based on the analysis of the test results in relation to the implementation of training workloads.
## 1. Introduction
Having begun as a rehabilitation tool, sport for disabled athletes has evolved into competitive sport at the Olympic level [1]. A real breakthrough in competition in cross-country skiing for the disabled took place at the beginning of the new millennium. Changing the medical classification into the sport-specific functional one in disabled sports means that the athlete’s predispositions and performance abilities determine the final result [2]. The number of sporting events was limited by reducing the number of competition classes [3,4,5]. In the 2006 Paralympic Games (PG) in Turin, competition in cross-country skiing was narrowed down to three groups of participants: visually impaired (VI) athletes, standing skiers with physical disability, and athletes using a sit-ski [3]. For example, standing skiers with physical disability are grouped into the following sport classes: lower limb impairments (LW 2, LW 3, LW 4), upper limb impairments (LW 5/7, LW 6, LW 8), and LW 9, which combines upper and lower limb impairments [6]. Among athletes in these sport classes, VO2max differs because of their different disabilities and because less muscle mass is involved during skiing. That is why the Realistic Handicap Competition and Kreative Renn Ergebnis Kontrolle (RHC-KREK) system was introduced at the 2006 PG with regard to individual athletes, in order to level out the chances of athletes with various physical disabilities competing in the same competition group [7].
The aforementioned changes led to training becoming more professional. During the preparations for the 2002 PG, training was divided into annual cycles, as there were 4 years ahead to prepare the cross-country skiing Paralympic team [8]. Because training-related overloads and, as a consequence, general overtraining [9] and injuries occurred [10,11], risks similar to those of able-bodied cross-country skiers increased. Subsequently, the assessment of general health with preliminary screening, and particularly the assessment of endurance, gained significance [12]. Hence, endurance tests have become an indispensable element of the training process, particularly in cross-country skiing for the disabled. A rational training program must be based on objective premises, while the selection of loads should comply with the athlete’s individual endurance predispositions [13]. The literature on the subject lacks data concerning the physiological profile of disabled skiers and, to date, little information has appeared about the performance abilities of disabled cross-country skiers. No data have been published concerning the physiological profile of top-class athletes with physical disability, especially those skiing in the standing position (LW 2–LW 9). It seems that this scarcity of publications results mainly from the fact that only small groups of athletes with disabilities, with various types and levels of disability, have been examined, which is an obstacle to analyzing data and publishing results. Additionally, data concerning athletes with, e.g., visual impairment (VI) and intellectual disability (ID) are rare. For example, Bernardi described research only on athletes with lower-limb dysfunctions, including sit-skiers and athletes performing winter sports other than cross-country skiing [14,15]. In turn, Bhambhani discussed the physical capacity of a narrow group of skiers with various disabilities but included only one female with visual impairment who was skiing in a standing position [16]. Therefore, a review of the literature confirmed the scarcity of publications concerning the assessment of the physical capacity of cross-country skiers with motor disabilities. Among the literature, there is one study on the aerobic capacity of cross-country skiers, including three females, but it concerns athletes with intellectual disabilities (ID) [17].
The aim of this paper is to present the level of exercise capacity of a Paralympic champion based on the retrospective analysis of the laboratory test results in relation to the implementation of training loads.
## 2.1. Study Participant
The study subject was a female athlete with a disability of the upper limbs, which she lost in a post-traumatic amputation after an agricultural accident at the age of 3. She began training and participating in athletic runs at the age of 22, and at the age of 23 (in the year 2000) she took up professional cross-country skiing and was qualified to sport class LW 5/7, i.e., skiing without poles. This was confirmed before her first international competition according to the Paralympic sport classification [18].
She was 29 years old when she won two gold medals in the 2006 PG and 33 years old when she won a bronze medal in the 2010 PG in Vancouver [19].
## 2.2. Health Evaluation
Each time before the physical exercise tests until exhaustion, a pre-participation examination (PPE) was carried out, consisting of a general medical examination and an assessment of the ECG and blood and urine tests.
## 2.3. Exercise Testing
In the absence of contraindications, the exhaustion tests were performed three times a year from 2001 to 2010, i.e., before the General Preparation Phase (I), before the Specific Preparation Phase (II), and during the Competitive Phase (III) [8]. The procedure started with a general warm-up and was followed by the exercise test performed on a Monark cycle ergometer. Exercise loads starting from 60 W were increased by 30 W every two minutes until volitional exhaustion, normally occurring after 10–15 min. Before, during, and until the 3rd minute after the test, variables such as minute ventilation of the lungs (VE), oxygen consumption (VO2), and carbon dioxide production (VCO2) were registered with the use of an ergospirometer (MES system, Dymek, A., Kraków, Poland). At the same time, heart rate (HR) was monitored with a Polar Electro device. Then, the maximal oxygen uptake (VO2max) and ventilatory threshold (VT) were calculated; the VT was determined based on criteria described in the literature [20,21]. Arterialized blood was taken from an earlobe twice, i.e., before the test and 3 min after finishing the work on the ergometer. The blood lactate (La) concentration was determined using a DR.LANGE LP 20 device. Body weight (BW) and the percentage of fat (%F) were measured with a Tanita BF-662W scale before and after the test. To quantify the mechanical effect of the performed test, the total quantity of work performed (kJ) and the power achieved (W) were calculated. The tests were always performed in the same room and with the use of the same equipment. Each time, the ergospirometer was calibrated with a standard mixture of calibration gases, and the flow sensor was calibrated using a 3-litre hand calibration pump, taking into account the atmospheric pressure as well as the air temperature and humidity of the room where the tests were conducted. The calibration was performed under the supervision of a representative of MES, the manufacturer of the ergospirometer. The ergometer load was calibrated each day before starting the exercise protocol, and the cycling frequency was set by a metronome. The described procedures guaranteed the repeatability of the test.
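The total mechanical work follows directly from this load protocol (work = power × time, summed over stages). The following is a minimal sketch of that calculation, not software used in the study; the function name and the example exhaustion time are ours.

```python
# Minimal sketch of the incremental protocol described above: 60 W start,
# +30 W every 2 min until volitional exhaustion. Total work (kJ) is the
# sum of power x stage duration; the exhaustion time is an example input.
def protocol_work(minutes_completed: float, start_w: int = 60,
                  step_w: int = 30, stage_s: int = 120) -> tuple[float, int]:
    """Return (total work in kJ, power of the last stage reached in W)."""
    work_j, power = 0.0, start_w
    remaining = minutes_completed * 60
    while remaining > 0:
        t = min(stage_s, remaining)
        work_j += power * t          # J = W x s
        remaining -= t
        if remaining > 0:
            power += step_w
    return work_j / 1000, power

print(protocol_work(10))  # -> (72.0, 180)
```

Ten minutes to exhaustion ends the fifth stage at 180 W and corresponds to 72 kJ of work, which matches the 2001/2002 test results reported in Section 3.2.1.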
## 2.4. Training Data
Data on the training workloads realized before the PG in Salt Lake City in 2002 and in Turin in 2006 were based on the annual reports of the subsequent coaches of the Paralympic team published by Chojnacki [8]. In turn, the realization of training loads in the 2009/2010 season is based on reports that the athlete herself prepared according to the same scheme.
## 2.5. Analysis
In this study, selected results of exercise tests carried out in the periods directly preceding participation in the PGs are presented. During these periods, the athlete reached the highest exertional capability due to training optimization.
The data obtained from the endurance tests underwent descriptive analysis. The levels of VO2max and blood La, as well as the work performed during the tests, were assessed retrospectively in the context of the applied training workloads using visual inspection and qualitative analysis. Additionally, the mean, standard deviation, and coefficient of variation were calculated for the observed values.
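A minimal Python sketch of the descriptive statistics mentioned above; the three example values are taken from the VO2max results reported below for the 2005/2006 season, and the helper name is ours:

```python
import statistics as st

def cv_percent(values):
    """Coefficient of variation (%) = sample SD / mean * 100."""
    return st.stdev(values) / st.mean(values) * 100

vo2max = [51.8, 45.5, 51.8]  # mL/kg/min, the three 2005/2006 test results
print(round(st.mean(vo2max), 1), round(st.stdev(vo2max), 1), round(cv_percent(vo2max), 1))
# -> 49.7 3.6 7.3
```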
This research was approved by the Bioethical Commission of District Medical Chamber (No. 70/KBL/OIL/2007). The athlete was fully informed of the purpose, terms, and conditions of the tests. The participant gave written informed consent in accordance with the Declaration of Helsinki before the start of the study and provided her consent to publish the reports in the future.
## 3.1. Health Status
The PPE carried out periodically revealed no significant health-related contraindications to competitive sport. The athlete missed only one test, due to an upper respiratory tract infection, in the 2001/2002 season. Despite the asymmetry in the length of the left and right forearm stumps, the athlete’s training did not cause any overload changes in particular sections of the spine.
## 3.2. Performance Abilities in Selected Seasons
During the follow-up in the years 2001–2010, the athlete completed a number of physical exercise tests. The results chosen for this study characterize the athlete’s morpho-functional features in her representative preparation seasons, i.e., those directly preceding her participation in the 2002, 2006, and 2010 PGs.
## 3.2.1. Season 2001/2002 before the Salt Lake City PG
The athlete took part in the laboratory physical exercise tests twice, i.e., during and after the preparation period. Anthropometric measurements performed during successive tests revealed that her body weight increased and her body fat remained between 17 and $24\%$, a normal range for women.
The athlete’s HR on finishing Test 1 was above 190 bpm (beats per minute). The HR at the level of the anaerobic threshold represented $84\%$ of HRmax. Test 2 produced similar results; however, on this occasion the athlete’s HRmax was much lower, amounting to only 175 bpm. This was accompanied by a marked decrease in maximal oxygen uptake, which amounted to 42.8 mL/kg/min in Test 1 and only 40.2 mL/kg/min in Test 2. It was also influenced by an increase in body weight from 48.8 to 51.0 kg. Additionally, minute ventilation decreased from 84.4 to 74.9 L/min.
The work performed in this test is characterized by the duration of the effort. The athlete did not enhance her performance compared to the first test: the duration of the effort was 10 min in both tests, with a power output at the level of 180 W. The blood La at exhaustion amounted to 12.6 mmol/L in the first test and 10.7 mmol/L in the second.
## 3.2.2. Season 2005/2006 before the Turin PG
During the training season of 2005/2006 (Table 1), the examined athlete took part in three tests. The morphological indices in consecutive examinations showed that body weight decreased and body fat amounted to $16.3\%$, $13.3\%$, and $14.9\%$, respectively. Maximal heart rate during the tests (187, 192, and 188 bpm in successive tests) confirms the athlete’s full engagement in the activity. The percentage value of HR at the VT ranged between 83 and $89\%$ of HRmax. VO2max reached the level of 51.8 mL/kg/min in two tests but was lower (45.5 mL/kg/min) in Test 2.
The athlete’s minute ventilation in Test 1 (95.0 L/min) and Test 2 (99.6 L/min) was above average and increased to 107.4 L/min in Test 3. The duration of effort in the three tests was 14′30″, 10′, and 13′, respectively. The maximal power output achieved was 240, 210, and 240 W, and the power at the VT was 150, 165, and 150 W, respectively. The blood La at exhaustion amounted to 10.6 mmol/L in the first test, 12.45 mmol/L in the second test, and 9.65 mmol/L in the third test. The respiratory exchange ratio (RER) was always above 1.1 in all tests.
## 3.2.3. Season 2009/2010 before the Vancouver PG
As in the previous season, the subject took part in three tests (Table 1). Compared to the results from the tests carried out 4 years before, morphological indices in subsequent tests revealed a slight increase in body mass and fat tissue which, in turn, meant a decrease in non-fat body mass.
The HR in Tests 2 and 3 conducted in this period was lower than the expected level calculated with the formula “220 minus age” (220 − 33 = 187 bpm). The values of VO2max in Test 1 were low, while in Tests 2 and 3 they reached the levels of 51.3 and 53.8 mL/kg/min, which proved the subject’s high aerobic performance. The duration of effort and achieved power were the same in all the tests, i.e., 11 min and 210 W. The blood La at exhaustion, which amounted to 11.7 mmol/L in the first test, decreased to 7.36 mmol/L in the third test.
## 3.3. How to Become a Paralympic Champion—Training Data in the 2005/2006 Season
In the season of peak sport achievements (2005/2006), 265 training hours were realized, of which the majority of the training work ($75.9\%$) was below the VT, i.e., at an HR not exceeding 164 bpm, which was $85\%$ of HRmax. The remaining training loads included effort at the VT and, sporadically, above the VT. Only a small fraction of the training workload was performed above the VO2max, in the form of short accelerations lasting a few seconds each. Therefore, in general, more than $75\%$ of the training workload was performed below the VT and about $25\%$ in the range between the VT and VO2max [8].
The competition period consisted of training camps interspersed with two cycles of competitions. Afterwards, the period of direct preparation for the Turin PG in 2006 started. At that time, a 10-day training camp in the high mountains was held, followed by a 1-week microcycle with a low training volume. In that period, the share of training effort below the ventilatory threshold was lowered to $50\%$ in favour of more intensive effort at the threshold and above. The general training volume in the 2005/2006 season increased slightly, by $3.1\%$, compared with the 2001/2002 season [8]. In turn, in the 2009/2010 season, the training volume was far greater and totalled 414 h, i.e., as much as $22\%$ more than in the 2005/2006 season. Training volumes in particular seasons in the preparation and competition periods are compared with selected results in Figure 1.
## 3.4. The Morpho-Functional Abilities of the Paralympic Champion in 2006 in Cross-Country Skiing
The morpho-functional level which enabled the athlete to compete for gold medals in the 2006 Paralympic Games is best reflected in the results she obtained in Test 3 in the period of direct preparation for the PG. In this period of top performance, the results of the morphological tests were as follows: height—160 cm, body mass—51.7 kg, fat tissue—$14.9\%$, body water—$58.9\%$, and non-fat body mass—44 kg. In this period, the most stable body mass, the lowest percentage of fat, and the highest non-fat body mass were observed compared to the previous training periods. The maximal values of selected exercise physiological variables reached by the subject in the 2005/2006 period were as follows: heart rate—188 bpm, maximal oxygen uptake—51.30 mL/kg/min, maximal minute ventilation—107.4 L/min, and maximal blood lactate concentration—9.65 mmol/L, with a duration of exercise of 13 min. The calculated oxygen pulse at VO2max amounted to 14.09 mL/beat. The values at the VT were as follows: heart rate—164 bpm, oxygen uptake—39.3 mL/kg/min, power—150 W, pedalling economy—13.54 mL/W (12.08 mL/W in the case of net VO2).
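The derived indices quoted above follow from simple ratios; below is a minimal Python sketch using the body mass of 51.7 kg reported for this period (the small differences from the published 14.09 mL/beat and 13.54 mL/W are presumably due to rounding in the source):

```python
def oxygen_pulse(vo2_ml_kg_min, body_mass_kg, hr_bpm):
    """O2 pulse (mL/beat) = absolute VO2 (mL/min) / heart rate (beats/min)."""
    return vo2_ml_kg_min * body_mass_kg / hr_bpm

def pedalling_economy(vo2_ml_kg_min, body_mass_kg, power_w):
    """Gross pedalling economy (mL O2 per W) at a given workload."""
    return vo2_ml_kg_min * body_mass_kg / power_w

print(round(oxygen_pulse(51.30, 51.7, 188), 2))      # -> 14.11 mL/beat at VO2max
print(round(pedalling_economy(39.3, 51.7, 150), 2))  # -> 13.55 mL/W at the VT
```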
## 4. Discussion
The longitudinal analysis of laboratory test results in relation to training workloads is very important for facilitating modifications of the training process and thereby obtaining optimal performance. It is common knowledge that physical endurance, i.e., the ability to sustain long-term or hard work without signs of fatigue leading to profound systemic changes, as well as post-exertion recovery abilities, determines the performance of cross-country skiers. Physical exercise tests are used to provide information on the current level of endurance capabilities of athletes [22]. Exercise tolerance in disabled sports depends on a number of factors, such as metabolic profile/capacity [23] and body type and build [24,25]. The impact of these factors on endurance varies depending on the type, intensity, and duration of physical activity during cross-country competition [26]. Sports results in cross-country skiing are also affected by external factors, such as equipment, ski waxing, snow conditions, terrain configuration, or skiing tactics, and by internal factors, e.g., the economy of skiing, biomechanical technique, or anthropometric attributes [27]. Additionally, in disabled sports, the type of disability, e.g., visual impairment (VI), intellectual disability (ID), or impaired motor, muscle, and joint coordination, affects all the aforementioned factors [28]. However, the physical attributes of an athlete, particularly physical endurance, constitute the main factor. It is affected by the performance of many systems, including the cardiovascular and respiratory systems which, to a large extent, are responsible for the aerobic potential of an athlete and especially for maximal oxygen uptake [22,29]. However, there is a scarcity of studies regarding the endurance capabilities of disabled athletes performing winter sports [30].
## 4.1. Maximal Oxygen Uptake
First of all, the plateauing of VO2 in all tests should be mentioned, which was usually visible 30–60 s before termination of the exercise. Accordingly, the adopted criteria for confirming that VO2max had been reached were: a plateau in oxygen uptake despite an increasing exercise load, notable stabilization of HRmax, an RER of at least 1.0 and typically >1.10–1.15, and La > 8–10 mmol/L. Admittedly, a plateau in oxygen consumption is the primary means of confirming that maximal oxygen uptake has been attained during an exercise test to exhaustion, but this criterion may be particularly difficult to satisfy in athletes with intellectual disabilities due to possible misunderstanding of the test methodology [31] and the need to maintain peak effort for as long as 30–60 s.
However, what causes the expression of a plateau in VO2 at the end of incremental exercise is still unresolved. It is arguable that plateauing depends on the adopted definition and may be a primarily methodological rather than physiological issue. The data presented may encourage the use of more objective and accurate plateau criteria and modify the current practice of using an obsolete criterion to confirm VO2max [32].
The VO2max, which in this case can be compared with data from the literature [29,33], is the most significant factor in cross-country skiing. In the direct preparation period before the 2006 PG, the VO2max achieved by the subject was 51.30 mL/kg/min (2.65 L/min). In turn, the highest level of maximal oxygen uptake in the whole long-term observation period, 2.74 L/min (53.80 mL/kg/min), was noted during the test carried out before the 2010 PG. As a comparison, the average VO2max in the test performed by three female athletes with intellectual disability was 51.8 mL/kg/min [15], and the VO2max reached by a female with visual impairment was 56.9 mL/kg/min [14]. Taking this into account, a maximal oxygen uptake at the level of 51.30 or 53.8 mL/kg/min may be seen as similar to other results and, according to the literature, represents a high level for a female athlete.
However, the maximal oxygen uptake achieved by the examined athlete, and the above-cited results of athletes with various types of disabilities, differ from the VO2max levels of able-bodied female skiers, who obtain results at the level of 65–70 mL/kg/min and more [33]. It is worth noting that, with regard to the initial tests on a cycle ergometer in 2001, the maximal oxygen uptake of the subject increased from 40.2 to 53.80 mL/kg/min. It should be emphasized that the studied athlete, although physically active since her trauma, started professional cross-country skiing training very late, at the age of 23, whereas the possibility of improving oxygen uptake becomes limited after the age of 30 [29]. However, regular training may compensate for age, and improvement appears possible even after turning 100 years old [34].
Furthermore, in general, VO2max during cycling exercise—as in this case—can be 10–15% lower than during running or skiing due to the involvement of less muscle mass. In the case of the athlete studied, however, VO2max during cycling was probably not very different from VO2max during skiing, as she could not fully use her upper body during skiing due to her disability (a partial amputation of the upper limbs). In fact, the best measurement of oxygen uptake is one taken during a field test with the use of a mobile ergospirometer. Bernardi carried out such research, focusing on the biomechanics of running and its training implications [15]. However, that research did not include any athlete with a dysfunction of the upper limbs, so it is difficult to compare the results. In turn, other research confirmed that significantly lower levels of VO2max and maximal heart rate are achieved by female and male athletes with spinal cord injuries (paraplegia, tetraplegia) competing in cross-country skiing in a sitting position [14].
For a complete analysis of the endurance capabilities of athletes, it is important to know the values of the described indices at the ventilatory threshold. The presented results of the study participant show that, over the training process, not only did the VO2max increase but, even more significantly, the ventilatory threshold occurred at a higher percentage of VO2max (Table 1). In general, the ventilatory threshold moved to the right, thus increasing the possibility of continuing exercise without a sudden increase in fatigue. Moreover, during peak effort in the 2009/2010 season, a decrease in La concentration to 7.36 mmol/L was noted, although the time of effort (11 min), i.e., the quantity of work, did not change. This suggests a successive improvement in exercise tolerance.
## 4.2. Selected Physiological Parameters
Certainly, apart from maximal oxygen uptake, other physiological indices affecting its level (e.g., HRmax and VEmax) are also significant in terms of assessing the endurance capabilities of an athlete. The HRmax of the study participant reached high values in the tests during the periods of her top performance in subsequent Paralympic seasons (191, 192, and 189 bpm, respectively) and was higher than the age-predicted maximum HR. A similarly high HRmax (192 bpm) was noted by Bhambhani in a 30-year-old athlete with visual impairment [16]. Both examples confirm that the achieved VO2max was at the highest possible level. It seems noteworthy, however, that the HRmax values noted in three young athletes (aged 17–19) with intellectual disabilities were at the level of 179, 178, and 179 bpm which, in turn, was much lower than their age-predicted maximum HR. In such a case, the achieved oxygen uptake value may be regarded only as VO2peak.
In turn, the levels of VEmax in the athletes with intellectual disabilities differed (84.9, 99.8, and 120.1 L/min), while in the athlete with visual impairment, the maximal minute ventilation reached a value of as much as 140.3 L/min [16].
## 4.3. Training Data and Analysis
Across all measures, BM, FFA, HRmax, HR at the ventilatory threshold, HR (% max), and VO2 (% max) were the most stable, with coefficients of variation (CV) in the 3–$6\%$ range. La at rest and after effort (CV in the 19–$25\%$ range) and body fat (CV = $17\%$) showed the highest variation. The analysis of the presented training workloads showed a considerable domination of aerobic training, which is compliant with the recommendations resulting from the subsequent tests in the years 2001–2006. Simultaneously, a considerable intensity of training and control competitions produced proper results in Test 1 of the 2005/2006 season in the form of an increase not only in VO2max, by approximately $17\%$, but also in the ventilatory threshold level. Despite high oxygen consumption and other physiological test results, the poorer results achieved during the 2010 PG may be explained by a large increase in training volume compared to the 2005/2006 season (Figure 1). Theoretically, these assumptions are confirmed by the sports results in the following season (2010/2011), in which the athlete won the 5 km freestyle run during the World Championships, as well as one year later, when she received the Crystal Globe as the winner of the 2011/2012 World Cup Series [35]. Soon after this, the athlete became pregnant. As her maternity leave began 1 year prior to the next PG, she did not manage to achieve the required level of performance, which was confirmed by the control competitions. For this reason, she did not take part in the 2014 PG.
## 5. Practical Applications
The studied example shows morpho-functional capabilities (requirements to become a Paralympic champion) which should characterize an athlete competing for medals in disabled cross-country skiing during the Paralympic Games. It may be concluded that implementing a regular health assessment aimed at improving endurance capabilities of the studied athlete in 2001 was justified and innovative since, with the present level of competition in Paralympic skiing, achieving a high level of endurance and optimal health disposition during the PG is a factor determining the sports result. Thus, the laboratory exercise tests preceded by a health status evaluation are becoming an indispensable element of the training process, while the selection of loads must be based on objective factors and individual endurance predispositions of an athlete. British experiences from the Olympic and Paralympic Games in London in 2012 recommend implementing the model of combining medical care with a training process in order to achieve ethical and functional balance between medical care and optimization of sports performance [36]. This underlines the importance and the validity of the comprehensive sports and medical care system applied in Paralympic cross-country skiers. In summary, the above data provide a unique insight into the characteristics required to succeed in cross-country skiing at the PG.
## 6. Limitations and Future Research Directions
The low number of subjects is one of the major limitations of this study; however, by definition, there is only ever one champion. Additionally, other factors that may influence an athlete’s performance, such as diet, rest, supplements, and medications, were not taken into account.
In the future, similar laboratory physical exercise tests should be carried out among other disabled athletes at top levels from class LW 2, LW 3, LW 4, LW 6, LW 8, and LW 9, and the obtained data should be compared. It could be useful to more objectively estimate “handicap” (RHC-KREK) for different disabilities.
## 7. Conclusions
The VO2max level, presently the most important determinant of physical fitness, achieved by the examined athlete with a physical disability (a partial amputation of the upper limbs) was 51.3–53.8 mL/kg/min, which is comparable with the levels achieved by athletes with intellectual disabilities or visually impaired competitors who also start in a standing position.
Maximal values of the selected exercise physiological variables reached by the subject in the period of direct preparation for the PG 2006 were as follows: heart rate—188 bpm, maximal minute ventilation—107.4 L/min, and maximal blood lactate concentration—9.65 mmol/L.
The oxygen pulse at VO2max amounted to 14.09 mL/beat. The values at the VT were as follows: heart rate—164 bpm, oxygen uptake—39.3 mL/kg/min, power—150 W, and pedalling economy—13.54 mL/W (12.08 mL/W in the case of net VO2).
The presented physiological, biochemical, and mechanical values show the level of endurance capabilities which characterized the female athlete who won 2 gold medals during PG 2006.
An excess of training workloads in the 2009/2010 season was one of the reasons for the decrease in the athlete’s performance despite her high level of VO2max.
Systematic medical supervision and pre-participation evaluation of a disabled athlete are very important in order to detect underlying medical problems and overtraining markers that may limit performance or even place the athlete at “increased risk”.
The analysis of the author’s own study confirms that, in light of the change from a medical classification to a sport-specific functional one, the final results in cross-country skiing depend more on the level of physical performance than on the type of disability.
# Life Habits of Healthcare Professionals during the Third Wave of COVID-19: A Cross-Sectional Study in a Spanish Hospital
## Abstract
(1) Background: To describe sleep quality, eating behaviour and alcohol, tobacco and illicit drug use among healthcare staff in a Spanish public hospital. (2) Methods: Cross-sectional descriptive study examining sleep quality (Pittsburgh Sleep Quality Index), eating behaviour (Three Factor Eating Questionnaire (R18)), tobacco and drug use (ESTUDES questionnaire) and alcohol use (CAGE: Cut down, Annoyed, Guilty, Eye-opener). (3) Results: 178 people participated, of whom $87.1\%$ (155) were women, with an average age of 41.59 ± 10.9 years. A total of $59.6\%$ of the healthcare workers had sleep problems, to a greater or lesser degree. The average daily consumption was 10.56 ± 6.74 cigarettes. The most commonly used drugs included cannabis, occasionally used by $88.37\%$, cocaine ($4.75\%$), ecstasy ($4.65\%$) and amphetamines ($2.33\%$). A total of $22.73\%$ of participants had increased their drug use, and $22.73\%$ had increased their alcohol consumption during the pandemic, with beer and wine accounting for $87.2\%$ of drinks consumed during this period. (4) Conclusions: In addition to the psychological and emotional impact already demonstrated, the COVID-19 crisis has repercussions on sleep quality, eating behaviour and alcohol, tobacco and drug consumption. Psychological disturbances have repercussions on the physical and functional status of healthcare workers. It is plausible that these alterations are due to stress, and it is necessary to act through treatment and prevention as well as to promote healthy habits.
## 1. Introduction
In the healthcare field, the different stress-generating situations have multiple repercussions, not only on business operations but also on the health of the patient and the workers involved. With this in mind, reviews of the literature describe how exhaustion derived from work stress affects the worker’s commitment to the organization, their productivity, patient safety and satisfaction and the quality of care [1,2]. Similarly, it can also modify the health behaviours of the worker and push them away, in certain situations, from the recommended guidelines [3].
Along these same lines, sleep is one of the factors that most contribute to physical and psychological well-being. Nursing professionals are usually, within the healthcare community, the most affected by sleep disorders [4]. In fact, those who suffer from this type of alteration have a high risk of developing performance decreases and making errors when administering medication [5]. Likewise, sleep disorders are related to the presence of multiple health problems [6]. An example of this is the consequences derived from the alteration of the circadian rhythm that occurs in workers who have night shifts or irregular shifts. Going even further, it has been possible to verify the relationship between this type of alteration and the presence of diabetes mellitus [7], cardiovascular diseases, metabolic syndrome or cancer [8].
Another consequence of suffering from sleep disorders is the limitation that those affected have in managing stress [9]. In this sense, although it has been seen how nurses (specifically) have a series of coping resources that can be classified as healthy (for example, socialization), they can also activate other resources that are not beneficial for their health, such as alcohol and tobacco use, social avoidance or anger displacement [10]. Regarding the consumption of alcohol, tobacco and illicit drugs, Baldisseri et al. [2007] [11] estimated that approximately $10\%$ to $15\%$ of healthcare professionals will abuse drugs or alcohol at some point in their professional life. In fact, they demonstrated a higher rate of abuse of benzodiazepines and opiates. Blake et al. [2011] [12] found that, of a sample of 325 nurses, two-thirds exceeded the recommended maximum daily intake of alcohol, and nearly one-fifth were smokers. A review of the literature carried out by Nilan et al. [2019] [13] reported a prevalence of tobacco use in nursing between $21\%$ and $25\%$, varying according to the socioeconomic level of the country.
Similarly, shift work, as an example of a stress-causing agent, negatively influences dietary habits and the weight of workers, increasing the prevalence of obesity. A higher frequency of food intake and/or consumption of poor-quality food has also been noted among shift workers [14,15,16]. The emotions that are generated in complex situations are capable of modifying eating behaviour, marking, for example, certain preferences for some foods or even modifying caloric intake [4].
In Spain, the SARS-CoV-2 (COVID-19) pandemic hit hard. By 30 April 2021, 78,216 people had died, and more than 80,000 healthcare workers had been infected [17]. This situation has meant that healthcare workers in our country have faced numerous work stressors, such as long working hours and/or work overload, among others. As a consequence, wave after wave, levels of anxiety and depression have progressively increased in nursing staff [18], and so have multiple sleep disorders in the healthcare community [19], with a significant impact on both physical and mental health [20,21,22,23,24]. Given that healthy lifestyles among healthcare professionals may be compromised by the multiple consequences of the current pandemic, it is necessary to explore what impact the third wave of COVID-19 has had on the health of our healthcare professionals. Thus, the aim of the present study was to describe the sleep quality, eating behaviour and alcohol, tobacco and drug consumption of healthcare workers in a Spanish public hospital. In doing so, we aim to highlight the importance of health-related habits, as well as the need to promote strategies to improve these habits and, consequently, the well-being of these workers.
## 2.1. Design
An intervention-free, cross-sectional descriptive study was carried out from February to March 2021 through an online, anonymous, and completely voluntary questionnaire developed in the Google Forms® application. With the aim of reaching the largest possible number of healthcare workers in the shortest possible time, dissemination was carried out through an instant messaging platform (WhatsApp).
The STROBE checklist guidelines for observational research have been followed.
## 2.2. Participants and Selection Criteria
All healthcare personnel of legal age who had a working relationship with the health centre were considered for participation. As an inclusion criterion, the healthcare personnel had to present the informed consent document completed and signed. The final sample was made up of 178 subjects.
## 2.3. Measurements and Instruments
Several sociodemographic variables were considered (age, sex, marital status, work service, seniority, type of contract, professional category and type of cohabitation), and sleep quality was evaluated with the Pittsburgh Sleep Quality Index (PSQI). This questionnaire, validated in Spanish [25], contains a total of 24 items that are grouped into 7 dimensions, which provide information on the different factors that affect sleep quality. Thus, subjective sleep quality refers to the subject’s own assessment, from 0 (very good) to 3 (very bad). Similarly, sleep latency measures how long the subject thinks it takes them to fall asleep, while sleep duration reports the actual number of hours a person sleeps at night. Habitual sleep efficiency results from the percentage relationship between the time the subject believes they are asleep and the time they have been lying down. Sleep disturbances inquire about the frequency with which alterations are noticed. Finally, the index also contemplates the use of sleep medication and daytime dysfunction, understood as the impact of sleep on the performance of daytime activities. The graphic representation of the scores obtained in each of the components allows us to see clearly where the sleep-related problems lie. The sum of the scores of the 7 dimensions gives a global score ranging from 0 to 21 points (the higher the score, the worse the quality of sleep). Setting a cut-off point of 5 points for its interpretation, a distinction is made between good sleep quality (scores below 5) and poor sleep quality (higher scores) [25].
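A minimal Python sketch of the global scoring rule described above; the function names are ours, and the component values in the example are hypothetical:

```python
def psqi_global(components):
    """Global PSQI score: sum of the 7 component scores, each rated 0-3 (range 0-21)."""
    assert len(components) == 7 and all(0 <= c <= 3 for c in components)
    return sum(components)

def sleep_quality(global_score, cutoff=5):
    """Scores below the cut-off indicate good sleep quality; higher scores, poor quality."""
    return "good" if global_score < cutoff else "poor"

score = psqi_global([1, 2, 1, 0, 1, 0, 2])  # hypothetical respondent
print(score, sleep_quality(score))          # -> 7 poor
```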
Eating behaviour was measured using the Three Factor Eating Questionnaire (R18) (TFEQ-R18). This validated tool [26] consists of 18 items with a Likert-type response model with 4 options, from 1 (rarely) to 4 (always). It evaluates three dimensions of eating behaviour: uncontrolled intake (the tendency to eat more than usual due to a loss of control when eating, with a subjective sensation of hunger); emotional eating (the inability to resist emotional cues, or eating in response to negative emotions); and cognitive restriction (the conscious restriction of eating aimed at controlling body weight and/or promoting weight loss). The three domain scores are converted to a scale from 0 to 100 (de Lauzon et al., 2004) according to the following equation: [(raw score − lowest possible raw score)/possible raw score range] × 100. Thus, higher scores indicate a greater presence of the behaviour to which the domain refers. This test presents appropriate reliability coefficients for the three subscales (ranging from 0.75 to 0.85), which also hold in a nursing population (0.85 to 0.90) [27].
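The 0–100 conversion can be illustrated with a short Python sketch; the item count per domain (9 items for uncontrolled intake in the standard TFEQ-R18) and the example raw score are our assumptions:

```python
def tfeq_scale(raw, n_items, min_item=1, max_item=4):
    """TFEQ-R18 domain score on 0-100:
    (raw - lowest possible raw score) / possible raw score range * 100."""
    lowest = n_items * min_item
    span = n_items * (max_item - min_item)
    return (raw - lowest) / span * 100

# Assumed 9-item uncontrolled intake domain, raw score 27 (items rated 1-4):
print(round(tfeq_scale(raw=27, n_items=9), 1))  # -> 66.7
```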
Data on tobacco use (daily use, use in the last 30 days, use in the last year and starting age) and on the use of drugs that are illegal in Spain (use, type of drug and use in the last 12 months) were collected with the validated ESTUDES questionnaire (Survey on Drug Use in Secondary Education in Spain), belonging to the National Drug Plan. This validated tool aims to collect information on drug use and other addictions in order to design and evaluate policies aimed at preventing the use of such substances and the problems derived from it, mainly focused on the family and/or school environment. A total of 10 items referring to the consumption of these substances (6 items on tobacco and 4 on illicit drugs) were selected for this study.
Finally, data on alcohol consumption were collected using the validated CAGE (Cut down, Annoyed, Guilty, Eye-opener) questionnaire [28,29] to detect cases of alcohol dependence or abuse. Developed by Ewing [1984] [28] and validated by Mayfield et al. [1974] [30], it is characterized by its brevity, simplicity and ease of application. It comprises a total of 4 questions, which can be administered in the context of a clinical interview or in isolation. Each affirmative answer adds 1 point, so the existence of problems is evidenced when 2 or more questions are answered affirmatively. It has a sensitivity between 65 and $100\%$ and a specificity of around 88–$100\%$ [31,32].
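The CAGE scoring rule is simple enough to state in a few lines of Python; the function name and the example answers are ours:

```python
def cage(answers):
    """CAGE score: one point per affirmative answer to the 4 questions;
    a score of 2 or more suggests alcohol-related problems."""
    assert len(answers) == 4
    score = sum(bool(a) for a in answers)
    return score, score >= 2

print(cage([True, False, True, False]))  # -> (2, True): possible problem drinking
```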
## 2.4. Data Analysis
A descriptive analysis of the variables described was carried out using the SPSS statistical package, version 26.0. Quantitative variables were described using measures of central tendency and dispersion (mean and standard deviation (SD)), while qualitative variables were expressed as absolute frequencies and percentages. The normality of the variables was tested using the Kolmogorov–Smirnov test with the Lilliefors correction.
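For readers reproducing the analysis outside SPSS, the same normality check is available in Python through statsmodels; the simulated data below merely reuse the sample’s reported mean and SD and are not the study data:

```python
import numpy as np
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(0)
age = rng.normal(41.59, 10.9, size=178)  # simulated ages with the reported mean/SD

ks_stat, p_value = lilliefors(age, dist="norm")  # KS test with Lilliefors correction
print(f"KS = {ks_stat:.3f}, p = {p_value:.3f}")  # p > 0.05: no evidence against normality
```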
## 2.5. Ethical Considerations
This study was approved by the Research Ethics Committee of the Health Areas of León and Bierzo (registration number: 20205) and the Ethics Committee of the University of León (ETICA-ULE-044-2020). Likewise, it was designed in such a way as to respect the ethical principles for global medical research that are reflected in the Declaration of Helsinki and its subsequent modifications. Prior to participation in the study, it was specified through informed consent documents that participation was completely voluntary, making it clear that the exploitation of the registered data would be carried out completely anonymously and confidentially for research purposes.
## 3. Results
A total of 178 people participated, of whom $87.1\%$ (155) were women, with a mean age of 41.59 ± 10.9 years (range 22–68). Table 1 shows the descriptive data of the sample.
Figure 1 shows the graphic representation of each of the PSQI components. Thus, $59.6\%$ (106) of the healthcare workers presented sleep problems of greater or lesser importance. A total of $19.7\%$ (35) of participants showed optimal levels of sleep quality, measured by the PSQI. In Table 2, we can see the descriptive results of each of the PSQI components.
Descriptive data on eating behaviour according to the Three Factor Eating Questionnaire (R18) (TFEQ-R18) are described in Table 2 and Figure 2. The results of the dimensions of “uncontrolled intake” and “emotional eating” are highlighted as they were exceptionally high (74.45 ± 20.50 and 70.6 ± 25.78 out of 100, respectively).
Regarding the consumption of toxic substances, tobacco, alcohol and illicit drugs were considered in this study. Regarding tobacco use, according to ESTUDES, the average age of onset was 16.23 ± 2.67 years (minimum 9, maximum 29). The average daily consumption was 10.56 ± 6.74 cigarettes (minimum 1, maximum 30 cigarettes per day); the frequency of consumption was mainly daily, in $25.28\%$ (45) of participants, but also sporadic: weekly in $5.06\%$ (9) and less often in $3.37\%$ (6). The most used illicit drug was cannabis, used at some point by $88.37\%$ (38), followed by cocaine ($4.75\%$ (2)), ecstasy ($4.65\%$ (2)) and, finally, amphetamines, used at some point by $2.33\%$ of respondents. Regarding alcohol, $22.73\%$ (25) had increased their consumption during the pandemic, with beer and wine representing $87.2\%$ of the drinks consumed in this period. Cocktails ($3.67\%$), liquors ($0.92\%$) and other types of alcoholic drinks ($8.26\%$) were also consumed. Table 3 presents more extensive data on the consumption of tobacco, drugs and alcohol.
All respondents consumed alcohol. In terms of the consumption pattern analysed using the CAGE questionnaire, we noted that $94.94\%$ (169) were social drinkers. At-risk consumption was observed in $2.25\%$ (4), harmful consumption in $1.69\%$ (3) and alcohol dependence in $1.12\%$ (2) of the participants.
## 4. Discussion
The literature has shown that the work dynamics of healthcare personnel generate high levels of stress and exhaustion, with important implications for health, both physical and mental [20,21,22,23,24], a situation that has been aggravated due to the COVID-19 pandemic [17]. This research has addressed aspects that interfere not only with the biopsychosocial development of the individual but also with the work quality of healthcare personnel during COVID-19. Thus, sleep quality, general and emotional eating behaviour and the consumption of alcohol, tobacco and illicit drugs have been studied.
Our study confirmed that the characteristics of the healthcare professionals surveyed are similar to those reported in other studies carried out in care units [33,34]. The sociodemographic data show that $87.1\%$ of the sample were women, with a mean age of 41.6 years; the female gender is represented in more than $80\%$ of health-related jobs, and healthcare workers are mainly women with an average working age above 40 years [35]. In addition, $68.5\%$ were nursing professionals, making nursing the largest group among the employees surveyed. Indeed, nurses constitute the largest healthcare workforce, representing more than $50\%$ of the total healthcare workforce globally [35,36]. In spite of this, these figures are far from the ideal levels needed to offer quality care, since a greater number of nurses is needed to guarantee optimal health levels [37]. Along the same lines, the employment profile of the hired personnel was described. The results of this study show that only $36\%$ of the healthcare personnel have tenure, obtained by passing a public service examination, while $64\%$ have a temporary employment contract. However, comparison with other countries can be somewhat complex, as labour contracts vary depending on the health scenario. Thus, a pre-pandemic Brazilian study reports that, indeed, most healthcare professionals were hired as service providers (temporary staff) [38]. In this line, the results of this work highlight the Spanish problem of a shortage of healthcare personnel. In an attempt to deal with the pandemic, public administrations hired staff from the existing job placement lists as well as newly graduated professionals (mostly novices) to help with the health emergency. The enormous spreading speed of COVID-19, added to the high number of infections among healthcare personnel, forced these administrations to increase the supply of temporary jobs [39,40,41].
Regarding sleep quality, the results of this study showed that around $60\%$ of the healthcare personnel had some sleep disorder, while around $20\%$ showed optimal levels. These data are similar to those found in comparable works, in which healthcare personnel are presented as a group with poor sleep quality, both before and after the onset of the pandemic [42,43]. Although the proportions obtained in this study are high, other investigations that evaluated sleep quality among healthcare personnel during COVID-19 show even higher percentages. For example, one of the most important studies carried out during the pandemic period found that $75\%$ of healthcare workers had poor sleep quality [44]. Similarly, in Saudi Arabia, using the same questionnaire, a prevalence of sleep deprivation of $83\%$ was found among healthcare personnel [45]. Outside the pandemic period, this finding could be explained by the alteration of circadian rhythms derived from rotating shifts (the usual shifts in care units), which in turn causes less stable sleep rhythms [46]. In the current context, the authors think it is important to point out that the high prevalence of sleep disorders found in this study, conducted one year after the start of the pandemic, may be related to concerns about the contagious nature of COVID-19 and the work dynamics established in this period. On the other hand, when comparing the overall PSQI score of our work (7.92 ± 4.18) with other studies, our figures are lower than those obtained in other investigations. For example, in a Chinese study of 180 healthcare workers who worked during COVID-19, in addition to a higher PSQI score (8.6 ± 4.6), it was also shown that high levels of stress and anxiety had a negative impact on sleep quality [47].
Furthermore, regarding emotional eating, this study observed that the sample presented a high level of emotional eating (the inability to resist emotional signals or eating in response to negative emotions) (70.6 ± 25.8). Likewise, the data obtained for uncontrolled intake (74.4 ± 20.5) (the tendency to eat more than usual due to a loss of control when eating, with a subjective feeling of hunger) were high. In other studies, similar data were obtained in response to negative or uncomfortable emotional states [48]. Regarding cognitive restriction (the conscious restriction of eating aimed at controlling body weight and/or promoting weight loss), although the results are better than for the other dimensions of the questionnaire (66.3 ± 16.3), they are also high. Job changes, job demands and uncertainty reflect how healthcare personnel “compensate with food” for the negative emotions experienced in stressful situations [21].
In relation to tobacco consumption, our results show that one out of three healthcare professionals increased their consumption during this third wave. The published literature had already indicated that cigarette consumption increased among Spanish healthcare professionals during COVID-19 [49]. This may be explained by the tendency for cigarette consumption to increase in the face of environmental stressors of various kinds, such as conflicts or disasters, particularly in the presence of depressive symptoms or post-traumatic stress disorders [50,51].
As for alcohol consumption, this work notes that one out of every four healthcare workers increased their consumption. Although such consumption has been associated with social life for decades, the pandemic has shown that the absence of social life did not lead to a reduction in alcohol consumption. In the healthcare environment, alcohol abuse or dependence may be associated with having worked as healthcare personnel during the pandemic period and may again be related to situations that potentially generate post-traumatic stress disorders [50,51,52].
This study has a series of limitations to highlight. The first is the sample size and the fact that the study was carried out in a single hospital. Furthermore, the difficulty in answering the Pittsburgh questionnaire can be considered another limitation; indeed, our results show that 37 people did not complete this instrument.
In future lines of study, we propose to expand this research with the inclusion of not only other variables of interest that have not been contemplated yet but also professionals from other centres from different parts of Spain and even Europe. It would also be interesting to include healthcare professionals from hospital centres as well as those dedicated to community health in the study sample.
## 5. Conclusions
The literature reports that the COVID-19 crisis has affected all types of healthcare workers, generating a significant emotional impact. The feelings of fear, anxiety and uncertainty, as well as the care overload and the pressure to which healthcare workers have been subjected during the pandemic, have considerably increased the consequences for their physical and psychological health. This study provides data on the quality of sleep and diet and the consumption of alcohol and tobacco of healthcare professionals in times of pandemic. Our results reflect that the alterations already demonstrated in psychological health extend to the physical sphere of the individual, which is fundamental for their functioning not only as workers but also as people. Since stress levels are thought to be the cause of these alterations, it would be worthwhile to implement strategies dedicated not only to the prevention and treatment of the psychological consequences but also to addressing the fundamental aspects that favour healthy habits.
# Clinical-Scale Mesenchymal Stem Cell-Derived Extracellular Vesicle Therapy for Wound Healing
## Abstract
We developed an extracellular vesicle (EV) bioprocessing platform for the scalable production of human Wharton’s jelly mesenchymal stem cell (MSC)-derived EVs. The effects of clinical-scale MSC-EV products on wound healing were tested in two different wound models: subcutaneous injection of EVs in a conventional full-thickness rat model and topical application of EVs using a sterile re-absorbable gelatin sponge in the chamber mouse model that was developed to prevent the contraction of wound areas. In vivo efficacy tests showed that treatment with MSC-EVs improved the recovery following wound injury, regardless of the type of wound model or mode of treatment. In vitro mechanistic studies using multiple cell lines involved in wound healing showed that EV therapy contributed to all stages of wound healing, such as anti-inflammation and proliferation/migration of keratinocytes, fibroblasts, and endothelial cells, to enhance wound re-epithelialization, extracellular matrix remodeling, and angiogenesis.
## 1. Introduction
Cutaneous wounds are common injuries caused by trauma, burns, ulcers, or surgery. Non-healing cutaneous wounds can impose severe clinical burdens on patients in the absence of effective treatment strategies. The beneficial effects of exogenous mesenchymal stem cells (MSCs) on wound healing have been observed in various animal models and clinical cases [1,2]. Clinical test results using MSCs to enhance wound healing have been promising [3,4]. Notwithstanding the promising results obtained in clinical trials, MSC-based therapies are not considered a standard of care in clinical settings due to various limitations to their applicability [5,6].
A cell-free treatment paradigm using MSC-derived extracellular vesicles (EVs) can avoid the cell-related problems associated with stem cell therapy while exerting the paracrine actions of MSCs. In addition, the “off-the-shelf” use of allogeneic EVs derived from healthy and young stem cell sources, such as MSCs from the umbilical cord, has the advantage of scalable production and storage under standardized procedures with high restorative capacity. However, critical hurdles remain in the translation of MSC-EVs into clinical therapeutics. Previous studies have used EV preparations obtained from the conventional 2D culture of MSCs; to date, no preclinical or clinical studies have examined the effects of MSC-EVs produced at scale with customized therapeutic properties. We have previously reported that MSCs 3D-cultured as size-controlled cellular aggregates on a large scale better preserve the innate phenotype and properties of MSCs than 2D monolayer cultures, resulting in significantly augmented secretion of therapeutic MSC-derived EVs and their therapeutic contents (miRNAs and cytokines) [7].
In the present study, we hypothesized that a clinical-scale EV product using a 3D micropatterned well system would enhance the wound healing process. To verify this, we developed an EV-bioprocessing platform designed using a cell non-adhesive microwell-patterned array for the scalable production of human Wharton’s jelly (WJ)-MSC-derived EVs in serum-free media. The effects of clinical-scale EV products on wound healing were tested in two different wound models: subcutaneous injection of EVs in a conventional full-thickness rat model and topical application of EVs using a sterile re-absorbable gelatin sponge in a chamber mouse model that was developed to prevent the contraction of wound areas. In addition, we performed in vitro and in vivo mechanistic studies using multiple cell lines involved in the wound healing process.
## 2.1. EV Characterisation
The yield of EVs obtained from the 3D culture system was estimated to be approximately 8155.28 EVs per cell. The EVs had a typical round shape on electron microscopy (TEM and cryo-EM) (Figure 1A), and the mean particle diameter was 146.0 nm (Figure 1B). We investigated the expression of CD9, CD63 and CD81 using the ExoView tetraspanin kit. EVs were first captured by antibodies against each tetraspanin and then fluorescently labeled with detection antibodies for the three tetraspanins. The CD63+ subpopulation was shown to be larger than the CD9+ or CD81+ subpopulations (Figure 1C). The presence of EV-specific positive markers (CD63, CD81 and syntenin-1) further confirmed the identity of the particles as EVs (Figure 1D). The particle/protein ratio was $6.5 \times 10^8$ particles/μg. Potential contaminants, including histone H2A.Z, GM130, and antibiotics, were assessed by Western blot or ELISA; antibiotics, GM130, and histone H2A.Z were not detected (Figure 1E). The characteristics of the EVs and their cargo contents did not change after 1 week at room temperature (Figure 1F).
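The two quality metrics quoted above are simple ratios; below is a minimal Python sketch with hypothetical batch numbers chosen only to be consistent with the reported figures:

```python
def evs_per_cell(total_particles, producing_cells):
    """Average EV yield per producing cell (particles/cell)."""
    return total_particles / producing_cells

def particle_protein_ratio(total_particles, protein_ug):
    """Purity index: particles per microgram of total protein."""
    return total_particles / protein_ug

# Hypothetical batch: 8.16e11 particles from 1e8 cells; 1 mg (1000 ug) total protein.
print(f"{evs_per_cell(8.16e11, 1e8):.0f} EVs/cell")                # -> ~8160
print(f"{particle_protein_ratio(6.5e11, 1000):.1e} particles/ug")  # -> 6.5e+08
```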
## 2.2. MSC-EVs Induce Re-Epithelialization in Both Types of Wound Models
To investigate the efficacy and mechanism of MSC-EVs, a full-thickness wound was created in rats, and $2 \times 10^8$ EVs/rat were injected subcutaneously for 3 d (Figure 2A). Wound closure in the MSC-EV group was higher than that in the PBS-treated group (Figure 2B,C). In addition, 14 d after wound induction, the contractility and repair ability of the wound center were measured, and the percentage of re-epithelialization was analyzed as a measure of wound repair capacity (Figure 2D,E). MSC-EV treatment significantly increased re-epithelialization (Figure 2D) and reduced the size of the wound area compared with the controls (Figure 2E).
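The paper does not spell out its planimetry formula; as an assumption, the sketch below uses the standard definition of wound closure, i.e., the fraction of the initial wound area that has closed by day t:

```python
def wound_closure_pct(area_day0_mm2, area_dayt_mm2):
    """Standard closure metric: (A0 - At) / A0 * 100 (%), from traced wound areas."""
    return (area_day0_mm2 - area_dayt_mm2) / area_day0_mm2 * 100

# Hypothetical example: a 100 mm^2 wound shrinking to 35 mm^2 is 65% closed.
print(wound_closure_pct(100.0, 35.0))  # -> 65.0
```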
In addition, we tested the effects of EVs in a mouse chamber wound model because, unlike in humans, rodent skin wounds contract soon after wound formation (Figure 2F). In the chamber model, topical application of EVs using a sterile re-absorbable gelatin sponge (Cutanplast) in the chamber mouse model induced wound closure (Figure 2G,H) and improved re-epithelialization and granulation tissue in the chamber (Figure 2I,J). MSC-EVs induced a significant reduction in the size of the wound areas (%) in the chamber, strengthened the newly formed epidermal layer, and promoted the production of granulation tissue in the chamber.
## 2.3. MSC-EVs Accelerate Wound Healing by Promoting the Migration of Keratinocytes
MSC-EVs stimulated epithelial regeneration in both wound models. MSC-EVs promoted hypertrophy of the epithelial cell layer after 3 d of treatment (Figure 3A,B). Immunohistochemical examination 7 d after wounding showed that the number of keratinocytes in the epithelial cell layer was increased, suggesting that MSC-EVs promote the proliferation of keratinocytes for re-epithelialization (Figure 3E,F). Interestingly, the epithelial cell layers returned to normal thickness after 2 weeks of MSC-EV treatment (Figure 3C,D), suggesting that EV-mediated regeneration of the epidermis occurs mainly during the initial phase of wound healing and the remodeling of scar tissue during the maturation phase. In wound tissue treated with MSC-EVs, the skin tissue was observed to stabilize and thin during the maturation stage. Immunohistochemistry for keratin 14 (a marker of keratinocytes) and Ki-67 (a marker of proliferating cells) showed that MSC-EVs stimulated the proliferation and migration of keratinocytes (Figure 3E,F).
## 2.4. MSC-EVs Promote the Migration of Mature Fibroblasts into the Granulation Tissue
MSC-EV therapy stimulated the proliferation of fibroblasts to promote the maturation of granulation tissue in both the full-thickness and chamber wound models (Figure 4). Treatment with MSC-EVs increased the number of proliferating fibroblasts that were positive for both Ki67 and vimentin (a marker of fibroblast cells) in immunological staining (Figure 4B,E). In addition, the migration of proliferating fibroblasts to the granulation tissue was increased after treatment with MSC-EVs, from subcutaneous areas in the chamber model and from the non-injured regions in the full-thickness model (Figure 4A,D).
## 2.5. MSC-EVs Promote the Formation of New Blood Vessels in the Wound Area
Immunohistochemical staining for CD31 (a marker of vascular structures) showed that MSC-EVs enhanced the vascular structure in both the epithelial cell layer and the wound center region during the wound healing process (Figure 5A). Similarly, immunohistochemical staining for vascular endothelial growth factor (VEGF, a blood vessel marker) showed that MSC-EVs promoted angiogenesis (Figure 5C). We also measured the tissue levels of pro-angiogenic growth factors and found that VEGF, angiopoietin (Anpt)-1, and Anpt-2 levels were significantly increased in tissue lysates obtained from the dorsal wound area in the EV group compared to those in the control group (Figure 5E–G).
## 2.6. In Vitro Assay of MSC-EV Effects on Four Major Cell Types: Fibroblasts, Keratinocytes, Endothelial Cells, and Inflammatory Cells
We performed in vitro studies to investigate the mechanisms of MSC-EVs using multiple cell lines involved in the wound healing process: keratinocytes (HaCaT), fibroblasts (NIH-3T3), endothelial cells (HUVECs), and inflammatory cells (RAW264.7). For both NIH-3T3 and HaCaT cells, cell motility was assessed using a scratch wound model, with various doses of MSC-EVs (2, 5, and $10 \times 10^8$ EVs) administered for 24 h (Figure 6A,B). MSC-EVs promoted the proliferation of both keratinocytes and fibroblasts, although the maximal effective dose was lower in fibroblasts than in keratinocytes. The tube formation assay using HUVECs showed a dose-dependent increase in angiogenesis (Figure 6C). Lastly, inflammation-induced RAW264.7 macrophages were tested using the Griess reagent for NO production (Figure 6D). Treatment with MSC-EVs promoted the polarization of M2-type macrophages (Figure 6E). In addition, compared to the control group, the levels of inflammatory cytokines were significantly decreased, while the level of the anti-inflammatory cytokine IL-10 was increased in the EV group (Figure 6F).
## 3. Discussion
This study is the first to show that clinical-scale EV therapeutics are feasible using a micro-patterned well system and can improve the wound healing process. In this study, the effects of EV treatment were tested in different wound injury models under different treatment modes, which showed consistent findings. The mechanisms of action of MSC-EVs were assessed using both in vivo and in vitro models. The therapeutic potential of EVs can contribute to multiple stages of wound healing, such as cell proliferation and differentiation, inflammation, angiogenesis, and extracellular matrix remodeling. Specifically, our clinical-scale EV therapeutics could effectively induce the proliferation and migration of endothelial cells, keratinocytes, and fibroblasts to improve angiogenesis and re-epithelialization and regulate inflammatory cells in rodent wound models.
To date, multiple studies have investigated the effects of stem cell-derived EVs in wound models [8,9,10,11,12,13,14,15,16,17,18]. MSC-EV therapies obtained from various MSC sources, such as bone marrow, adipose tissue, and umbilical cord, have been used to improve recovery in various wound models. However, the development of MSC-EV therapeutics faces several hurdles, including establishing a consistent, scalable cell source and developing robust GMP-compliant upstream and downstream manufacturing processes [19]. MSCs undergo senescence, and their intrinsic ability to secrete EVs declines significantly in conventional 2D cultures; therefore, MSC-EV preparations may differ in their therapeutic potential. In addition, according to the US FDA guidance for industry on estimating the maximum safe starting dose in adult healthy volunteers (July 2005), one patient in clinical testing requires more than 100 times the dose given to one mouse or rat. The low output of EV preparations obtained from the conventional 2D culture of MSCs therefore limits the clinical application of EVs. EVs obtained under 3D culture conditions, such as the micro-patterned well system shown in the present study, hollow-fiber bioreactor-based 3D culture systems, and 3D scaffold cultures, exhibited an enhanced EV yield and a heightened damage-repair ability [20,21]. Therefore, for effective clinical-scale production of therapeutic EVs, large batches of MSCs are needed, which significantly affects the labor, time, and cost of production. In this study, we established a cell bank and used the 3D culture method together with a combination of filtration and a tangential flow filtration (TFF) system, which allowed the large-scale production of EVs (a yield more than 10–20-fold that of conventional 2D culture) without the use of serum. Compared to conventional stem cell-based therapeutics, our EV therapy has potential benefits in terms of cost-effectiveness when WJ-MSCs are cultured in a 3D micropatterned well system and the EVs are isolated using a TFF system (Supplementary Figure S2). More importantly, our scalable 3D-bioprocessing EV production method reduced donor/batch variation. Lastly, our small RNA sequencing data revealed that MSC-EV miRNAs play important roles in angiogenesis, cytoprotection, immune modulation, and rejuvenation, and that miRNAs such as miR-21-3p, miR-125a, and miR-126-3p are involved in the wound healing process after treatment with MSC-EVs (Supplementary Figure S3) [8,9,10,14,22,23]. MSC-EV treatment has been found to promote wound healing by increasing the expression of VEGF-A, Wnt, and PI3K/AKT in fibroblasts and keratinocytes. These findings suggest that EV-contained miRNAs and cargo play a key role in wound healing by regulating specific signaling pathways, but more research is needed to fully understand the mechanism and potential therapeutic applications of MSC-EVs in wound healing (Supplementary Figures S3 and S4) [8].
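The roughly 100-fold human-to-rodent dose ratio quoted above follows from the body-surface-area conversion in the cited FDA guidance; below is a minimal Python sketch, where the Km values are the guidance defaults and the body weights (20 g mouse, 60 kg human) and mouse dose are our illustrative assumptions:

```python
# FDA (2005) body-surface-area dose conversion: HED = animal dose x (animal Km / human Km).
KM = {"mouse": 3, "rat": 6, "human": 37}  # guidance default Km factors

def hed_mg_per_kg(animal_dose_mg_per_kg, species):
    """Human-equivalent dose (mg/kg) from an animal dose (mg/kg)."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

mouse_dose = 10.0                         # mg/kg in the mouse (illustrative)
hed = hed_mg_per_kg(mouse_dose, "mouse")  # ~0.81 mg/kg in humans
total_human = hed * 60                    # ~48.6 mg for a 60 kg patient
total_mouse = mouse_dose * 0.020          # 0.2 mg for a 20 g mouse
print(round(total_human / total_mouse))   # -> ~243x the total mouse dose
```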
In this study, the effects of MSC-EV treatment were tested in different species (mouse and rat) and wound models (mild: the traditional full-thickness model; severe: the chamber model), and showed consistent therapeutic benefits. The chamber model prevents the migration of keratinocytes into the wound and the closure of the wound via contraction [24]. It facilitates the de novo generation of epithelial tissue from the surface of skin ulcers. Our results suggest that the application of EVs stimulates wound-resident stem cells to promote the wound-healing process; however, further studies are required to evaluate the de novo generation of epithelial tissue from wounded tissue [24].
Wound healing is classically divided into four stages: hemostasis, inflammation, proliferation, and remodeling. Each stage is characterized by key molecular and cellular events and is coordinated by a host of secreted factors recognized and released by the cells of the wounding response [25]. As various cellular components are involved at different stages of the wound healing process, we performed in vitro assays to determine EV effects on four major cell types: fibroblasts, keratinocytes, endothelial cells, and inflammatory cells. Depending on the severity and chronology (time interval from the onset of wound injury) of the wound and the presence of comorbidities, such as infection and diabetes mellitus, one stage may be more prominent than the others, and the target of treatment may differ among patients. For example, therapies with anti-inflammatory effects are needed in the inflammatory phase, the first phase after cutaneous wounding, while enhancing angiogenesis can be an important strategy in patients with diabetes mellitus. Proliferation and remodeling are important targets for the treatment of chronic deep wounds. The in vitro assays can aid in assessing the targets of different wound healing treatments. The results of this study showed that MSC-EV therapeutics exert their effects in most phases of wound healing.
This study has several limitations. First, the molecular mechanisms of action of MSC-EVs could not be investigated. Of the cargo in exosomes, miRNAs are of prime importance in mediating the therapeutic effects on wound healing [8,9,10,11]. The molecular pathways of EV-miRNAs involved in wound healing are under investigation. Second, we studied the effects of MSC-EVs in healthy young mice and rats. Cutaneous wounds are difficult to heal in older patients and those with comorbidities, especially diabetes mellitus. We are currently investigating the effects of MSC-EVs in diabetic wound animal models. Lastly, further in vivo studies are needed to determine the dose-responsiveness and optimal dose of EVs for each specific phase of wound healing, as the optimal doses for angiogenesis and for the proliferation of keratinocytes and fibroblasts differed in our in vitro studies.
In conclusion, the present study demonstrated that our scalable 3D-bioprocessing production method is feasible for clinical-scale MSC-EV therapy. Moreover, our results showed that MSC-EVs promote wound healing in both mild and severe injuries via the regulation of various wound-healing phases.
## 4. Materials and Methods
All studies involving human subjects were approved by the Institutional Review Board of Samsung Medical Center. WJ was obtained from healthy volunteers. All volunteers or their guardians provided written informed consent to participate in the study. All experimental animal procedures were approved by the Institutional Animal Care and Use Committee (Laboratory Animal Research Center, an AAALAC International-approved facility) of Samsung Medical Center.
## 4.1. Preparation of EV-Three-Dimensional (3D) Spheroid Cultures of WJ-MSCs
MSCs derived from human WJ of the umbilical cord (WJ-MSCs) were culture-expanded to passage five in growth medium in a $5\%$ CO2 incubator at 37 °C and used at passage six to generate 3D spheroid cultures. WJ-MSCs were washed with phosphate-buffered saline (PBS), trypsinized using TrypLE Express (GIBCO, NY, USA), and centrifuged; fresh serum-free medium without heterologous proteins was then added, and the cells were counted using a hemocytometer. After cell counting, 60 mL of the cell suspension was seeded into a micro-patterned well system (EZSPHERE; ReproCELL Inc., Tokyo, Japan), a microarray containing approximately 69,000 microwells (500 μm diameter × 200 μm depth) coated with 2-methacryloyloxyethyl phosphorylcholine polymer, at a density of 400 cells/well. For the 3D spheroid culture of WJ-MSCs, serum-free medium (α-minimal essential medium) without antibiotics was used. 3D spheroidal cell aggregates were formed by inducing spontaneous aggregation: the cells were dispensed uniformly and cultured under static conditions in a CO2 incubator at 37 °C for 4 d.
## 4.2. Isolation of EVs
EV isolation was performed in a biological safety cabinet and was started immediately after harvesting of the conditioned medium. The culture medium was collected via gentle pipetting at the top of each well. To remove cell debris and apoptotic bodies, 1800 mL of culture medium was centrifuged at 2500× g for 10 min, followed by filtration through a 0.22-μm membrane. The filtered medium was separated using a 300-kDa MWCO mPES hollow fiber MiniKros filter module (Spectrum Laboratories, Rancho Dominguez, CA, USA) on a commercially available KrosFlo KR2I tangential flow filtration (TFF) system (Spectrum Laboratories, Rancho Dominguez, CA, USA), which facilitates the large-scale processing of samples. EV-containing samples were recirculated into a filtration bottle. Small molecules, including free proteins, passed through the membrane pores, were eluted as permeate, and were collected; this collected solution was used as the secretome. EVs were maintained in circulation as retentate and concentrated in the bag. We conducted five volume exchanges with PBS, and the EVs were subsequently concentrated to a final volume of 300 mL of recovery solution (PBS). The recovered solution was filtered through a 0.22-μm membrane.
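As a rough check on the diafiltration step, the expected clearance of freely permeating contaminants can be estimated with the standard constant-volume diafiltration relation. The sketch below is illustrative only and assumes ideal behavior with a sieving coefficient of 1 for small solutes; it is not the authors' process calculation.

```python
import math


def residual_fraction(diavolumes: float, sieving: float = 1.0) -> float:
    """Residual fraction of a freely permeating contaminant after
    constant-volume diafiltration (C/C0 = exp(-N * S))."""
    return math.exp(-diavolumes * sieving)


# Five PBS volume exchanges, as described above:
print(f"residual free protein ~ {residual_fraction(5):.2%}")  # ~0.67%
# Volume reduction from 1800 mL clarified medium to 300 mL retentate:
print(f"concentration factor ~ {1800 / 300:.0f}x")
```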
All processes were performed according to the guidelines on the quality, non-clinical, and clinical assessment of EV therapy products of the Korean Food and Drug Administration (guideline released December 2018) using good manufacturing practice (GMP)-compliant methods. Schematics of the processes of EV production, isolation, and quality control are shown in Supplementary Figure S1.
## 4.3. Characterization of EVs
Following the guidelines recommended by the International Society for Extracellular Vesicles (Minimal Information for Studies of Extracellular Vesicles 2018) and the Korean FDA, EVs isolated from the WJ-MSC culture medium were characterized in terms of their morphology, size distribution, surface markers, purity, potency markers, efficacy, stability, and safety [26].
See the Supplementary detailed methods for nanoparticle tracking analysis, Western blotting, transmission electron microscopy (TEM), enzyme-linked immunosorbent assay (ELISA), Exoview analysis, quantitative reverse transcription-polymerase chain reaction, and small RNA sequencing.
## 4.4. Two Animal Models of Cutaneous Wound
All animal experiments were approved by the Institutional Animal Care and Use Committee of Samsung Biomedical Research Institute and performed in accordance with the Institute of Laboratory Animal Resources guidelines. All animals were maintained in compliance with the relevant laws and institutional guidelines of the Laboratory Animal Research Center (AAALAC International-approved facility) at Samsung Medical Center.
## 4.4.1. Conventional Full-Thickness Skin Wound Rat Model
A conventional full-thickness cutaneous wound model was used in this study. Briefly, excisional wounds were created using an 8 mm diameter punch (Acuderm, Inc., Ft. Lauderdale, FL, USA) on the shaved dorsal skin under ketamine (100 mg/kg) and xylazine hydrochloride (5 mg/kg) anesthesia. Silicone splints were fixed around the excised wound. EVs were injected subcutaneously at four points around the wounds, while an equal volume of PBS was injected subcutaneously at the same positions in the control group rats. Based on the results of our preliminary experiments, a dose of 2 × 10^8 EVs/rat was selected for further experiments using the rat model.
## 4.4.2. Mouse Chamber Wound Model
Unlike human skin, rodent skin has a panniculus carnosus, a thin layer of muscle attached to the subcutaneous tissue that provides a contractile force for wound closure. In the full-thickness rat model, it was therefore difficult to measure the regeneration and recovery mechanisms of skin epithelial cells because of rapid wound healing by contraction. We therefore also tested the effects of EVs in a mouse chamber model [24,27].
We surgically removed the skin from the back of the mice to generate an ulcer and isolated the resulting wound from the surrounding skin using a skin chamber sutured to the deep fascia. A chamber made from an EP tube was placed inside the skin layer and fixed to it with a simple suture. Since the body surface area of a mouse is roughly half that of a rat, a dose of 1 × 10^8 EVs/mouse was selected for the mouse model and applied for 3 d after a full-thickness excision wound. Cutanplast was moistened with EVs and placed inside the chamber. To prevent inflammation in the chamber, antibiotics (Baytril) were injected for 2 weeks after surgery.
## 4.5. Measurement of Wound Contraction
Measurements of wound contraction and wound closure were performed using surgical calipers, and wound areas were quantified using Aperio ImageScope v12 software. Wounds were photographed on days 0, 1, 3, 5, 7, 10, 14, and 21 post-wounding, and wound size was determined by measuring the wound area in ImageJ (National Institutes of Health, Bethesda, MD, USA). The percentage of wound closure was calculated using the following equation:
$$\text{Wound closure (\%)} = \frac{\text{Initial wound size} - \text{Wound size on a given day}}{\text{Initial wound size}} \times 100$$
Using histological samples, a general linear model of time versus wound closure (re-epithelialization) and granulation tissue formation was evaluated for each treatment. Wound contraction was calculated as a percentage of the original wound size, with the original size taken as $100\%$ for each animal, using the equation above. The percentage of wound area was calculated using the following formula:
$$\text{Wound area (\%)} = \frac{\text{Area at biopsy}}{\text{Area on incision day}} \times 100$$
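For clarity, the two percentages above can be computed as in the following sketch; the area values are illustrative placeholders, not measured data from this study.

```python
def wound_closure_pct(initial_area: float, day_area: float) -> float:
    """Percent wound closure relative to the day-0 wound area."""
    return (initial_area - day_area) / initial_area * 100.0


def wound_area_pct(biopsy_area: float, incision_day_area: float) -> float:
    """Remaining wound area as a percentage of the original wound."""
    return biopsy_area / incision_day_area * 100.0


# Illustrative ImageJ measurements (mm^2) for one animal; values are made up.
areas = {0: 50.3, 3: 41.0, 7: 22.6, 14: 6.1}
for day, a in areas.items():
    print(f"day {day:2d}: closure {wound_closure_pct(areas[0], a):5.1f} %, "
          f"area {wound_area_pct(a, areas[0]):5.1f} %")
```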
## 4.6.1. Histological Analysis
Skin tissue samples were fixed in $4\%$ paraformaldehyde for 24 h and dehydrated with graded ethanol. The samples were then embedded in optimal cutting temperature compound and cut into 10–30-μm-thick sections. Hematoxylin and eosin (H&E) staining was performed using a commercial staining kit (H&E Staining Kit, ab245880; Abcam, Cambridge, UK) according to the manufacturer’s instructions. Images were captured using a microscope (ScanScope, USA).
## 4.6.2. Immunohistochemistry
Fifteen days after wound induction, the effect of MSC-EVs was compared with that of the control (basal medium) by immunostaining for Ki-67 (a cell proliferation marker) and vimentin (a fibroblast marker), according to the manufacturer’s instructions. Dorsal skin tissues were fixed in $4\%$ paraformaldehyde and blocked with $10\%$ normal goat serum. Sections were incubated overnight at 4 °C with rabbit anti-Ki-67 (1:50; Abcam, UK) and goat anti-vimentin (1:500; Abcam, UK) antibodies. The sections were then washed with PBS and incubated with secondary DyLight-labeled anti-goat IgG (1:200, 594 nm; Abcam, UK) and DyLight-labeled anti-rabbit IgG (1:200, 488 nm; Vector Laboratories, Burlingame, CA, USA) antibodies. Samples were imaged using a fluorescence microscope (EVOS; Advanced Microscopy Group, Bothell, WA, USA), and positively stained cells were quantified using ImageJ software.
## 4.6.3. Measurement of Cytokine Levels via ELISA
ELISA was performed using commercial kits according to the manufacturer’s instructions. The following ELISA kits were used: tumor necrosis factor-α (MBS140025, MyBioSource, San Diego, CA, USA), Ang-1 (MBS2601637, MyBioSource, San Diego, CA, USA), Ang-2 (MBS8420366, MyBioSource, San Diego, CA, USA), interleukin (IL)-10 (MBS140013, MyBioSource, San Diego, CA, USA), IL-6 (MBS824703, MyBioSource, San Diego, CA, USA), and IL-1β (MBS175967, MyBioSource, San Diego, CA, USA). All kits included standard proteins; therefore, protein amounts and EV counts were determined based on the standard curve from each kit.
## 4.7.1. Measurement of Nitric Oxide Production in RAW264.7 Cells
The level of NO was determined by measuring the quantity of nitrite in the supernatant using the Griess reaction. Macrophage RAW264.7 cells (1.0 × 10^5) were seeded into a 24-well plate and treated with lipopolysaccharide (LPS; 100 ng/mL) for 24 h. To measure the amount of NO produced, 50 μL of conditioned medium was mixed with an equal volume of Griess reagent (Sigma, Saint Louis, MO, USA) and incubated for 15 min at room temperature. Absorbance was measured at 540 nm using a microplate reader, and an absorbance versus sodium nitrite concentration standard curve was constructed.
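The standard-curve step can be implemented as a simple linear fit, as sketched below. The standard concentrations and absorbance readings in the sketch are illustrative placeholders, not values from this assay.

```python
import numpy as np

# Sodium nitrite standard curve for the Griess assay; the standards (uM)
# and A540 readings below are placeholders, not measured values.
std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0])  # uM nitrite
std_a540 = np.array([0.05, 0.11, 0.17, 0.30, 0.55])  # absorbance at 540 nm

slope, intercept = np.polyfit(std_conc, std_a540, 1)  # linear fit


def nitrite_uM(a540: float) -> float:
    """Interpolate nitrite concentration from a sample's A540 reading."""
    return (a540 - intercept) / slope


print(f"sample A540 = 0.22 -> {nitrite_uM(0.22):.1f} uM nitrite")
```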
## 4.7.2. Fibroblast Wound Healing Assay in NIH-3T3 Cells
NIH-3T3 cells were seeded at 1.8 × 10^5/well into a 12-well plate. The wells were then scratched longitudinally using a yellow pipette tip. After washing twice with high-glucose medium, cultures were treated with the same medium containing 5 μg/mL mitomycin C (Sigma, Saint Louis, MO, USA) with or without MSC-EVs (2, 5, and 10 × 10^8/mL). Cell migration was assayed 24 h after MSC-EV treatment using optical microscopy. Wound areas were measured using ImageJ software, and the percentage of cell motility was calculated using the following equation: ([Area at 0 h − Area at the assay endpoint]/Area at 0 h) × 100.
## 4.7.3. Keratinocyte Wound Healing Assay in HaCaT Cells
HaCaT cells were seeded at 2.2 × 10^5/well into a 12-well plate. The experimental procedure was the same as that used in the NIH-3T3 fibroblast wound-healing assay.
## 4.7.4. Angiogenesis Assay in Human Umbilical Vein Endothelial Cells
In vitro capillary network formation was determined using a tube formation assay on Matrigel (354248; Corning, Glendale, AZ, USA). Human umbilical vein endothelial cells (HUVECs) (1.5 × 10^4 cells/mL) were seeded onto Matrigel-coated wells of a 96-well plate and cultured in $1\%$ fetal bovine serum-supplemented Dulbecco’s Modified Eagle’s medium (10567014; Gibco, Waltham, MA, USA) in the presence of 5 × 10^8/mL MSC-EVs or PBS. Tube formation was observed using an inverted microscope (Leica DMi8, Wetzlar, Germany). The number of network structures was quantified in five randomly selected fields per well using ImageJ software.
## 4.8. Statistical Analyses
Statistical analyses were conducted using SPSS (SPSS Statistics Version 24.0, IBM Corp, Armonk, NY, USA) and GraphPad Prism 9 software (GraphPad Software, San Diego, CA, USA). The normality of the data was evaluated using the D’Agostino–Pearson test. One- and two-way analyses of variance with Tukey’s multiple comparison test were used to analyze the three groups. Student’s t-test and the Wilcoxon–Mann–Whitney test were used for two-group comparisons, as appropriate. Statistical analysis results are indicated in the figure legends. Results are expressed as the mean ± standard error. Statistical significance was defined as $p \leq 0.05$.
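The test-selection logic described above (a normality check followed by a parametric or nonparametric comparison) corresponds to a workflow like the following Python sketch with SciPy/statsmodels. The group values are simulated placeholders, not study data, and the group names are invented for illustration.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Simulated placeholder measurements for three groups (not study data).
groups = {
    "control": rng.normal(10.0, 1.0, 20),
    "EV-low": rng.normal(11.0, 1.0, 20),
    "EV-high": rng.normal(12.5, 1.0, 20),
}

# D'Agostino-Pearson normality check per group.
all_normal = all(stats.normaltest(v).pvalue > 0.05 for v in groups.values())

if all_normal:
    _, p = stats.f_oneway(*groups.values())   # one-way ANOVA omnibus test
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups), 20)
    print(pairwise_tukeyhsd(values, labels))  # Tukey's multiple comparisons
else:
    _, p = stats.mannwhitneyu(groups["control"], groups["EV-high"])

print(f"omnibus p = {p:.4g}")
```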
# A Nucleus Accumbens Tac1 Neural Circuit Regulates Avoidance Responses to Aversive Stimuli
## Abstract
Neural circuits that control aversion are essential for motivational regulation and survival in animals. The nucleus accumbens (NAc) plays an important role in predicting aversive events and translating motivations into actions. However, the NAc circuits that mediate aversive behaviors remain elusive. Here, we report that tachykinin precursor 1 (Tac1) neurons in the NAc medial shell regulate avoidance responses to aversive stimuli. We show that NAcTac1 neurons project to the lateral hypothalamic area (LH) and that the NAcTac1→LH pathway contributes to avoidance responses. Moreover, the medial prefrontal cortex (mPFC) sends excitatory inputs to the NAc, and this circuit is involved in the regulation of avoidance responses to aversive stimuli. Overall, our study reveals a discrete NAc Tac1 circuit that senses aversive stimuli and drives avoidance behaviors.
## 1. Introduction
Reward and aversion are critical for motivated behaviors and are associated with many mood disorders. Unexpected stimuli and threats drive aversive behaviors, an innate response crucial to the survival of animals [1]. Aversive stimuli engage negative emotions and contribute to prominent psychiatric disorders. Enormous advances have been made in understanding the neural circuits underlying reward [2,3,4,5,6,7]. However, the neural circuits underlying aversion remain elusive.
It is widely thought that the nucleus accumbens (NAc) is a critical brain region in the reward and aversion circuits that integrate different inputs, leading to motivated behaviors [8,9,10,11,12,13,14]. Anatomically, the NAc can be divided into the core, lateral shell, and medial shell [15]. It has been found that distinct NAc neural circuits are involved in different brain functions [16,17,18,19,20,21]. Dopamine transmissions from the ventral tegmental area have been linked to reward and aversion processing [18]. Glutamatergic inputs from the thalamic paraventricular nucleus to the NAc regulate aversion [19,22]. How the NAc regulates opposite behaviors at the same time remains elusive. Thus, it is worth investigating whether distinct NAc subregions are included in discrete neural circuits involved in aversion.
The major projection neurons in the NAc are medium spiny neurons (MSNs), distinguished by their dopamine receptor expression (D1-MSNs and D2-MSNs) [23,24,25]. Markers for D1-MSNs and D2-MSNs also include the expression of different peptides [26,27]. Substance P, the major peptide encoded by tachykinin precursor 1 gene (TAC1), and dynorphin are exclusively expressed in D1-MSNs [28,29,30]. Previous work demonstrated that dynorphin-containing neurons in the NAc mediate negative affective states [16,31,32]. This raises the possibility that Tac1 neurons in the NAc medial shell may be involved in the regulation of aversion.
The NAc has received attention as a crucial convergence point of reward and aversion circuits, as it receives multiple projections from the ventral tegmental area (VTA), medial prefrontal cortex (mPFC), basolateral amygdala (BLA), and hippocampus [33,34]. The mPFC is strongly related to neural circuits encoding aversion and decision making [35,36,37]. The prelimbic and infralimbic regions of the mPFC have been implicated in aversion [38,39,40]. However, studies have yielded conflicting findings. How the mPFC regulates aversion through specific neural circuits remains underexplored.
Here, we show that tachykinin precursor 1 (Tac1) neurons in the NAc medial shell mediate avoidance responses to aversive stimuli. Neural tracing and electrophysiological data show that NAcTac1 neurons project inhibitory signals to the lateral hypothalamic area (LH) and modulate avoidance behavior in the presence of aversive stimuli. Additionally, neurons in the NAc medial shell receive inputs from mPFC glutamatergic (mPFCGlut) neurons, and optogenetic manipulation of the mPFCGlut → NAc circuit regulates aversive behaviors. These results indicate the essential role of Tac1 neurons in encoding aversive stimuli and regulating behavioral responses.
## 2.1. NAcTac1 Neurons Regulate Avoidance Behavior in Response to Aversive Stimuli
To investigate the expression of Tac1 neurons in the NAc, we crossed the Tac1-internal ribosome entry site 2 (IRES2)-Cre mouse line [41] with a Cre-dependent tdTomato reporter line, Ai9 [42] (Figure 1A). We observed that Tac1-tdTomato cellular expression closely matched endogenous substance P and Dopamine Receptor 1 (Figure 1B–G). To mimic aversion in mice, Tac1-Cre male mice were given an injection of formalin in the plantar surface of a hindpaw, as previously described [43,44]. Patch-clamp recordings were performed on Tac1 neurons in the NAc (Figure 1H). We observed decreased excitability of Tac1 neurons in the medial shell, but not in the lateral shell (Figure 1I,J and Figure S1A,B). These data indicate that Tac1 neurons in the NAc medial shell are involved in the circuit regulating aversion.
To determine whether NAcTac1 neurons in the NAc medial shell regulate aversive behaviors, we performed chemogenetics using designer receptors exclusively activated by designer drugs (DREADDs). To selectively manipulate the activity of Tac1 neurons, we bilaterally injected AAV-DIO-hM3D(Gq)-mCherry, AAV-DIO-hM4D(Gi)-mCherry, or AAV-DIO-mCherry into the NAc medial shell of Tac1-Cre male mice (Figure 1K). Formaldehyde has been shown to act as an unfamiliar aversive stimulus for rodents without altering their motor activity [45]. We thus measured the approach-avoidance behaviors of male mice toward an aversive stimulus (formaldehyde) while inhibiting or activating NAcTac1 neurons. A piece of cotton dipped in $5\%$ formaldehyde was placed on one side of a three-chamber arena; the remaining chambers were designated the ‘center’ and ‘safe’ areas. Mice tend to explore a novel object but display strong avoidance behaviors when exposed to formaldehyde. Mice were introduced into the chamber containing formaldehyde, and interactions with formaldehyde were recorded for 5 min. We observed that hM3D(Gq)-injected mice spent significantly more time exploring the aversive stimulus than hM4D(Gi)- and mCherry (control)-injected mice (Figure 1L–O). To investigate whether the activity of Tac1 neurons regulates interactions with a neutral stimulus, we performed the approach experiment with the $5\%$ formaldehyde cotton replaced by a piece of regular cotton. We found that the activity of NAcTac1 neurons did not affect time spent interacting with the neutral stimulus (Figure S2A–C). Moreover, olfaction and locomotion were not affected by hM4D(Gi) or hM3D(Gq) injection (Figure S3A,B). These results suggest that NAcTac1 neurons in the medial shell are crucial to avoidance behaviors in response to aversive stimuli.
## 2.2. NAcTac1 Neurons Project to the LH
To identify possible downstream targets of NAcTac1 neurons that may encode aversive stimuli, we injected AAV-DIO-mCherry into the NAc medial shell of Tac1-Cre mice. Four weeks later, the animals were euthanized, and the brain-wide distribution of regions targeted by NAcTac1 neurons was examined (Figure 2A–C). The whole-brain mapping results indicated that dense mCherry-labeled terminals were found in the lateral hypothalamic area (LH) (Figure 2D–F, Figure S4).
We next assessed the synaptic function of NAcTac1 neurons projecting to the LH. We first expressed channelrhodopsin-2 (ChR2) in NAcTac1 neurons, and then, selectively activated the terminals of NAcTac1 neurons in the LH via optogenetic stimulation (5 ms pulses, 20 Hz) (Figure 2G,H). In the whole-cell patch-clamp configuration, inhibitory postsynaptic currents (IPSCs) were recorded in 17 out of 42 LH neurons (Figure 2I), whereas no excitatory postsynaptic currents were recorded. IPSCs were eliminated by pretreatment with the GABA-A receptor antagonist bicuculline (Figure 2J,K, Figure S5). These results suggest that NAcTac1 neurons send inhibitory inputs to the LH.
## 2.3. NAcTac1-to-LH Projection Mediates Avoidance Behaviors in Response to Aversive Stimuli
To assess whether the NAcTac1→LH circuit regulates avoidance responses to aversive stimuli, male Tac1-Cre mice were unilaterally injected with AAV-DIO-mCherry or AAV-DIO-ChR2-mCherry (Figure 3A,B). Six weeks later, we carried out an approach-avoidance assay (Figure 3C). A piece of cotton dipped in $5\%$ formaldehyde was placed in one corner of a square chamber, mice were introduced into the chamber, and their interactions with formaldehyde were recorded. Compared with the control stimulation, the selective delivery of blue light (5 ms pulses, 20 Hz for 5 min) to ChR2-expressing terminals in the LH elicited a significant increase in interaction time with formaldehyde (Figure 3D,E). We also calculated the total distance traveled by the mice in the arena and found that locomotion was not affected (Figure 3F).
We then selectively inhibited the NAcTac1 terminals in the LH by delivering continuous yellow light to the LH of male mice bilaterally infected with AAV-DIO-NpHR-eYFP in the NAc medial shell (Figure 3G,H). In the approach-avoidance assay, the photoinhibition of NAcTac1→LH projection significantly decreased interaction time with formaldehyde without affecting locomotion (Figure 3I–L). We also carried out a real-time place aversion (RTPA) assay and found that the photoinhibition of NAcTac1→LH projection elicited avoidance of the photoinhibition-paired chamber (Figure S6A–D). Taken together, these data indicate that the NAcTac1→LH circuit is crucial to avoidance behaviors in response to aversive stimuli.
## 2.4. mPFCGlut Inputs Activate NAc Neurons
Next, we sought to identify upstream brain regions of NAcTac1 neurons that might mediate aversive behaviors. We employed a monosynaptic viral tracing strategy in Tac1-Cre mice. The NAc medial shell of Tac1-Cre mice was injected with AAV-DIO-RVG and AAV-DIO-TVA-GFP; four weeks later, RV-EnVA-dsRed was injected into the LH (Figure 4A–C). We found that the medial prefrontal cortex (mPFC) projects to NAcTac1 neurons (Figure 4D). Based on emerging studies [37,38,46,47,48] showing that the mPFC is critical for neural circuits of aversion, we focused on mPFC neurons projecting to NAcTac1 neurons. To determine which types of mPFC neurons are involved in the NAcTac1 circuit, we performed immunofluorescence experiments and found that mCherry-labeled neurons co-expressed the glutamatergic marker VGLUT2 (Figure 4E and Figure S7).
We next evaluated the synaptic function of mPFCGlut neurons projecting to NAc neurons. We first expressed ChR2 in mPFCGlut neurons by injecting AAV-CaMKIIα-ChR2-eYFP into the mPFC, and then, selectively activated NAc neurons that were receiving projections from mPFCGlut neurons via optogenetic stimulation (5 ms pulse, 20 Hz) (Figure 4F,G). In the whole-cell patch-clamp configuration, excitatory postsynaptic currents (EPSCs) were recorded in 32 out of 60 NAc neurons (Figure 4H). However, no inhibitory postsynaptic currents were recorded. EPSCs were eliminated via pretreatment with the AMPA receptor antagonist CNQX (Figure 4I,J and Figure S8). To identify whether Tac1 neurons in the NAc medial shell received inputs from the mPFCGlut, we expressed ChR2 in mPFCGlut neurons in Tac1-Cre; Ai9 mice. In the patch-clamp recording, EPSCs were recorded in Tac1 neurons in the NAc medial shell (Figure S9A–D). These results suggest that mPFCGlut neurons project excitatory signals to Tac1 neurons in the NAc medial shell.
## 2.5. The mPFCGlut-to-NAc Circuit Modulates Avoidance Behaviors in Response to Aversive Stimuli
To investigate whether activation of the mPFCGlut→NAc circuit decreases avoidance behaviors in response to aversive stimuli, male mice were unilaterally injected with AAV-CaMKIIα-eYFP or AAV-CaMKIIα-ChR2-eYFP (Figure 5A,B). Six weeks later, we carried out an approach-avoidance assay (Figure 5C). Compared with the control stimulation, the selective delivery of blue light (5 ms pulses, 20 Hz for 5 min) to ChR2-expressing terminals in the NAc medial shell elicited a significant increase in interaction time with formaldehyde (Figure 5D,E). We also calculated the total distance traveled by the mice in the arena and found that locomotion was not affected (Figure 5K).
Next, we selectively inhibited mPFCGlut terminals in the NAc medial shell by delivering continuous yellow light to the NAc medial shell of male mice bilaterally infected with AAV-CaMKIIα-eNpHR3-mCherry in the mPFC (Figure 5G,H). In the approach-avoidance assay, inhibition of the mPFCGlut→NAc pathway significantly reduced interaction time with formaldehyde without affecting locomotion (Figure 5I–L). We also carried out a real-time place aversion (RTPA) assay and found that inhibition of the mPFCGlut→NAc pathway elicited avoidance of the photoinhibition-paired chamber (Figure S10A–D).
Furthermore, we determined whether the activation of mPFCGlut neurons could attenuate aversive behaviors during the inhibition of NAcTac1 neurons. AAV-CaMKIIα-hM3D(Gq)-mCherry or AAV-CaMKIIα-mCherry was injected into the mPFC, while AAV-DIO-hM4D(Gi)-mCherry was injected into the NAc (Figure S11A). We found that the activation of mPFC neurons attenuated avoidance behaviors in response to aversive stimuli (Figure S11B,C). Taken together, these data indicate that the mPFCGlut→NAc circuit is crucial to avoidance behaviors in response to aversive stimuli.
## 3. Discussion
Using neural circuit tracing, chemogenetics, electrophysiology, and optogenetics, we found that NAcTac1 neurons in the medial shell mediate avoidance responses to aversive stimuli in 10–14-week-old male mice. NAcTac1 neurons send inhibitory inputs to the LH, and the NAcTac1→LH circuit is required for aversive behaviors in mice. Moreover, NAc neurons receive glutamatergic inputs from mPFCGlut neurons, and the mPFCGlut→NAc projection regulates behavioral responses in the presence of aversive stimuli.
Our mapping study of output circuits demonstrated that NAcTac1 neurons in the medial shell project to LH neurons. The LH is a brain region that contains heterogeneous cell populations [49] and is involved in the regulation of multiple behaviors, such as feeding, aversion, and reward-seeking [50,51]. Previous studies have reported that the activation of LH neurons causes avoidance and aversive behaviors [52,53,54]. This is consistent with our data, which show that the optogenetic inhibition of NAcTac1 terminals in the LH induced aversive behaviors in mice. However, the LH receives multiple excitatory and inhibitory inputs from both cortical and subcortical structures; further research is needed to fully resolve the neuron populations receiving inhibitory inputs from NAcTac1 neurons and the mechanisms underlying aversion in the LH.
Previous studies have reported that mPFC neurons project to the NAc and that these neurons are able to elicit avoidance [55]. It has also been reported that projections from the mPFC to the NAc have no effect on aversion [37]. These conflicting results may partially be caused by the heterogeneity of the different regions of the NAc: the core, medial shell, and lateral shell. These regions contain similar classes of medium spiny neurons (MSNs). NAc medial shell MSNs have been described as “medium-small spiny neurons” with low density [56]. In addition, the NAc medial shell shows a perplexing phenotype that opposes the classical direct and indirect pathway model [16,57,58]. Moreover, previous work indicates that D1 neurons in the NAc also represent a portion of the classical indirect pathway and are activated by aversive stimuli [59,60]. Taken together, the classical striatal direct and indirect pathway models do not apply directly to the NAc. In this study, we found that mPFCGlut neurons send excitatory signals to neurons in the NAc medial shell and that this mPFCGlut→NAc circuit is involved in the regulation of aversion. However, aversion is a multidimensional construct, and we cannot rule out the possibility that other neurons in the NAc receive inputs from the mPFC and contribute to aversive behaviors.
In summary, we delineated a distinct population of NAc Tac1 neurons that encodes aversive stimuli. Furthermore, we dissected the dedicated function of this circuit and identified it as a critical component of the aversion circuit. These results may improve our understanding of aversion circuitry. By clarifying the structure and mechanisms underlying aversion and negative prediction, it may become possible to design intervention strategies for pathological depressive conditions.
## 4.1. Animals
All experimental procedures were approved by the Animal Advisory Committee of Northeast Normal University, China. The laboratory was kept under specific pathogen-free (SPF) conditions. All mice were maintained on a 12–12 h light–dark cycle (lights on from 6:00 to 18:00 every day), with food and water provided ad libitum. All behavioral tests were performed during the light period. C57BL/6J mice were obtained from Huafukang Animal Center, Beijing, China. Tac1-IRES2-Cre mice (Jax No. 021877) were obtained from Jackson Laboratory (USA). Ai9 mice (Jax No. 007905) were kindly provided by Prof. Chunjie Zhao from Southeast University.
## 4.2. Viral Vector Generation
For monosynaptic tracing, AAV-EF1α-DIO-His-EGFP-2a-TVA (AAV2/9, 5.53 × 10^12 particles/mL), AAV-EF1α-DIO-RG (AAV2/9, 5.22 × 10^12 particles/mL), and RV-ENVA-ΔG-dsRed (3.10 × 10^8 particles/mL) were purchased from BrainVTA (Wuhan, China). AAV-EF1α-DIO-mCherry (AAV2/9, 1.47 × 10^13 particles/mL) was purchased from GeneChem (Shanghai, China).
For functional analysis, AAV-EF1α-DIO-hM4D(Gi)-mCherry (AAV2/9, 1.044 × 10^12 particles/mL), AAV-EF1α-DIO-hM3D(Gq)-mCherry (AAV2/9, 2.205 × 10^12 particles/mL), and AAV-EF1α-DIO-ChR2-mCherry (AAV2/9, 1.25 × 10^13 particles/mL) were purchased from GeneChem (Shanghai, China). AAV-CaMKIIα-ChR2(H134R)-eYFP-WPRE-hGH polyA (AAV2/9, 2.77 × 10^12 particles/mL), AAV-CaMKIIα-eYFP-WPRE-hGH polyA (AAV2/9, 6.6 × 10^12 particles/mL), AAV-CaMKIIα-eNpHR3.0-mCherry-WPRE-hGH polyA (AAV2/9, 4.25 × 10^12 particles/mL), and AAV-CaMKIIα-mCherry-WPRE-hGH polyA (AAV2/9, 2.29 × 10^12 particles/mL) were purchased from BrainVTA (China).
## 4.3. Viral Tracing
For output mapping, the NAc medial shell of Tac1-Cre mice was injected with AAV-EF1α-DIO-mCherry (200 nL). For input mapping, AAV-EF1α-DIO-His-EGFP-2a-TVA and AAV-EF1α-DIO-RG (1:1, total 150 nL) were injected into the NAc medial shell of Tac1-Cre mice. Four weeks later, 300 nL of RV-ENVA-ΔG-dsRed was injected into the LH. Thus, we only infected NAcTac1 neurons in the medial shell that projected to the LH and traced their inputs. The mice were sacrificed one week after RV injection.
## 4.4. Stereotaxic Injection
Mice were anesthetized with $1.0\%$ sodium pentobarbital (0.1 g/kg body weight, i.p.). Viruses were delivered at a rate of 100 nL/min using a stereotaxic instrument (RWD Co, Shenzhen China) and a 5 µL syringe (Hamilton, Sigma, USA). After each injection, the syringe was left in place for 15 min, and then, slowly withdrawn. Experiments were performed at least 4–6 weeks after virus injection.
Stereotaxic coordinates were derived from the Paxinos and Franklin Mouse Brain Atlas and empirically adjusted. The coordinates for injection into the NAc medial shell (total volume of 400 nL) were +1.9 mm AP, ±0.6 mm ML, and −4.4 mm DV. The coordinates for injection into the LH (total volume of 150 nL) were −1.5 mm AP, ±0.9 mm ML, and −5.1 mm DV. The coordinates for injection into the mPFC (total volume of 400 nL) were +2.2 mm AP, ±0.3 mm ML, and −1.35 mm DV. For monosynaptic circuit tracing and the ChR2 experiment, viruses were delivered unilaterally. For other functional analysis, viruses were delivered bilaterally.
## 4.5. Implantation of Optical Fibers
Optogenetic behavioral experiments were performed as previously described [16,18,19], and optic fibers (NA: 0.37; INPER, Wuhan, China) were unilaterally (ChR2) or bilaterally (NpHR3.0) implanted over the LH (AP: −1.5 mm; ML: ±0.9 mm; DV: −4.9 mm) and NAc medial shell (AP: +1.9 mm; ML: ±0.5 mm; DV: −4.2 mm). The mice were subjected to behavioral tests after 2 weeks of recovery. For optogenetic activation experiments, both control and ChR2-injected mice were stimulated using a 20 Hz 465 nm blue laser (INPER, China) with 2–5 mW light power at the fiber tips. For optogenetic inhibition experiments, both control and NpHR-injected mice were continuously stimulated using a 589 nm yellow laser (INPER, China) with 2–5 mW light power at the fiber tips.
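The stimulation parameters above (5 ms pulses at 20 Hz) imply a 10% duty cycle. A minimal sketch of how such a pulse train could be parameterized is shown below; this is illustrative only and is not the laser-control code used in the study.

```python
import numpy as np


def pulse_train(freq_hz: float = 20.0, pulse_ms: float = 5.0,
                duration_s: float = 300.0):
    """Onset/offset times (s) for a square optogenetic pulse train,
    e.g., 5 ms pulses at 20 Hz over a 5 min stimulation epoch."""
    onsets = np.arange(0.0, duration_s, 1.0 / freq_hz)
    offsets = onsets + pulse_ms / 1000.0
    return onsets, offsets


on, off = pulse_train()
duty = (off[0] - on[0]) * 20.0  # 5 ms pulses at 20 Hz -> 10% duty cycle
print(f"{len(on)} pulses, duty cycle {duty:.0%}")
```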
## 4.6. Immunohistochemistry
As previously described [61], mice were deeply anesthetized with sodium pentobarbital (0.5 g/kg, i.p.) and perfused transcardially with 0.1 M PBS followed by $4\%$ paraformaldehyde (PFA) in PBS. Their brains were then post-fixed overnight at 4 °C and transferred to $30\%$ sucrose solution. Sagittal and coronal sections were cut on a freezing microtome (Leica, CM 1950, USA) at a thickness of 40 µm. The sections were rinsed in PBS, and then, incubated in blocking solution ($0.2\%$ Triton X-100, $10\%$ serum, and $2\%$ BSA in 0.1 M PBS) for 2 h. After washing with PBS, the sections were counterstained with DAPI (1:2000, Life Technologies, D3571, USA) for 8 min. The sections were then covered with ProLong gold mounting media (Thermo Fisher, P36930, USA). The following primary antibodies were used: NeuN (1:1000; EMD Millipore, MAB377, USA), substance P (1:1000; Abcam, ab10353, USA), VGLUT2 (1:500; Synaptic Systems, 135 402, Germany), and DRD1 (1:500; Novus Biologicals, NB110-60017, USA). The following secondary antibodies were used: Alexa Fluor 488-conjugated goat anti-mouse (1:1000; Invitrogen, A21121, USA), Alexa Fluor 488-conjugated goat anti-rabbit (1:1000; Invitrogen, A11008, USA), and Alexa Fluor 488-conjugated goat anti-guinea pig (1:1000; Invitrogen, A11073, USA). All images were acquired using a Zeiss LSM 880 confocal microscope (USA).
## 4.7. Ex Vivo Electrophysiology
Mice were deeply anesthetized with sodium pentobarbital and quickly decapitated to remove their brains. Acute slices (300 μm thick) were cut using a vibrating microtome (Leica, VT 1000S). The sections were quickly transferred to a recovery chamber and incubated at 35 °C for 30 min in recovery solution comprising 93 mM NMDG, 1.2 mM NaH2PO4, 30 mM NaHCO3, 20 mM HEPES, 25 mM D-Glucose, 5 mM Na-ascorbate, 2 mM Thiourea, 3 mM Na-pyruvate, 3 mM KCl, 10 mM MgSO4, 0.5 mM CaCl2, 93 mM HCl, and 12 mM NAC (pH 7.4). The slices were then incubated at room temperature for 1 h in carbogenated artificial cerebral spinal fluid (aCSF) comprising 120 mM NaCl, 2.5 mM KCl, 1.0 mM NaH2PO4, 26 mM NaHCO3, 11 mM D-glucose, 2.0 mM MgCl2, and 2.0 mM CaCl2 (pH 7.4) before recording. Recordings were made at 33 °C (TC-324B; Warner Instruments, USA). All solutions were saturated with $95\%$ O2/$5\%$ CO2.
Whole-cell patch-clamp recordings were performed using an EPC-10/2 amplifier (HEKA, Germany). The recording pipettes were pulled from borosilicate glass tubes (Sutter Instruments, USA) and had a resistance of 3–6 MΩ; only whole-cell patches with a series resistance < 15 MΩ were used for recordings. EPSCs and IPSCs were recorded while holding the membrane potential at −70 mV.
For optical recording in the LH, AAV-DIO-ChR2-mCherry was injected into the NAc medial shell of Tac1-Cre mice, and LH neurons in areas with a high density of mCherry-labeled terminals were patched. ChR2 was activated with 465 nm blue light delivered via a laser (INPER-B1-465, INPER, China). To record optically evoked IPSCs (oIPSCs) in LH neurons, CNQX (50 µM, Tocris Bioscience, 1045, USA) was added to the aCSF. Patch pipettes were filled with 135 mM CsCl, 1 mM EGTA, 4 mM Mg-ATP, 0.6 mM Na-GTP, and 10 mM HEPES (pH 7.4).
For optical recording in the NAc medial shell, AAV-CaMKIIα-ChR2-mCherry was injected into the mPFC of C57 mice, and NAc medial shell neurons in areas with a high density of mCherry-labeled terminals were patched. ChR2 was activated with 465 nm blue light delivered via a laser (INPER-B1-465, INPER, China). To record optically evoked EPSCs (oEPSCs) in NAc medial shell neurons, bicuculline (20 µM, Tocris Bioscience, 0130) was added to the aCSF. Patch pipettes were filled with 130 mM K-gluconate, 1 mM EGTA, 5 mM Na-phosphocreatine, 2 mM Mg-ATP, 0.3 mM Na-GTP, and 10 mM HEPES (pH 7.4).
Data were acquired using PATCHMASTER 1.3 (HEKA, Germany) and analyzed using MiniAnalysis 1.0 (Synaptosoft), Clampfit 10.0 (Molecular Devices), and Igor 5.03 (Wavemetrics) software.
## 4.8. Behavioral Assays
All mice used for the behavioral assays were male littermates. An experimenter blinded to the genotypes performed all the tests.
## 4.9. Approach-Avoidance Test
The avoidance test was conducted to measure avoidance of an unfamiliar aversive stimulus. For the chemogenetics experiments, control-, hM3D(Gq)-, and hM4D(Gi)-injected mice were injected i.p. with clozapine N-oxide (CNO, 5 mg/kg) or JHU37160 (0.5 mg/kg) and introduced into the chamber half an hour later. The chamber (70 cm × 70 cm) was divided into three zones: ‘safe’, ‘center’, and ‘form’ (the formaldehyde side). A piece of cotton dipped in $5\%$ formaldehyde was placed on the ‘form’ side; the remaining zones were designated the ‘safe’ and ‘center’ areas. As previously described [18], mice tend to explore a novel object but display strong avoidance behaviors when exposed to formaldehyde. Interactions with formaldehyde were recorded and analyzed using the EthoVision XT system (Noldus, Wageningen, The Netherlands).
For the optogenetics (ChR2 and NpHR) experiments, after recovery from the surgery for virus injection and fiber implantation, mice were introduced into an arena (40 cm × 40 cm). A piece of cotton dipped in $5\%$ formaldehyde was placed in a corner of the arena. Interactions with formaldehyde were recorded in 5 min segments and analyzed using the EthoVision XT system (Noldus, Wageningen, The Netherlands).
## 4.10. RTPA Assay
On the day of habituation, the mice were introduced into a Plexiglas box with two chambers (30 cm × 30 cm × 50 cm each) and allowed to explore the chamber freely for 15 min. One chamber was randomly designated the stimulation side, and the other was designated the non-stimulation side. The time spent in each of the chambers was recorded. Mice that spent more than $60\%$ of the total time in either compartment were excluded from the experiments. On the day of the experiment, the mice were randomly introduced into either chamber and received continuous 589 nm yellow light (or 20 Hz 465 nm blue light) every time they entered the stimulation chamber until they moved into the non-stimulation chamber. The time spent in each chamber was recorded and analyzed using the EthoVision XT system.
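The assay’s closed-loop rule and the habituation exclusion criterion can be expressed compactly as follows. This is an illustrative sketch assuming a one-dimensional position readout and invented function names; it is not the EthoVision configuration used in the study.

```python
def rtpa_laser_on(x_cm: float, boundary_cm: float = 30.0,
                  stim_side: str = "left") -> bool:
    """Closed-loop rule: the laser is on whenever the tracked mouse
    is inside the stimulation-paired chamber."""
    in_left = x_cm < boundary_cm
    return in_left if stim_side == "left" else not in_left


def excluded_at_baseline(time_in_left_s: float, total_s: float = 900.0) -> bool:
    """Exclude mice spending >60% of the 15 min habituation in either chamber."""
    frac = time_in_left_s / total_s
    return frac > 0.60 or frac < 0.40


# Example: per-frame tracker updates drive the laser TTL.
for x in (10.0, 29.5, 30.5, 45.0):  # tracked x-positions (cm), placeholders
    print(f"x = {x:4.1f} cm -> laser {'ON' if rtpa_laser_on(x) else 'OFF'}")
```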
## 4.11. Olfaction Test
Before the test, all food pellets were removed from the home cage, but the water bottle was kept in place. On the day of the experiment, a mouse was introduced into a clean cage containing clean bedding 3 cm deep and allowed to explore the arena freely for 5 min. The animal was then transferred to an empty clean cage. In the cage containing the bedding, food was buried approximately 1 cm beneath the surface in a random corner. The surface of the bedding was smoothed out, and the animal was reintroduced into the cage. The latency to find the buried food was recorded; the food was considered found when the mouse started to eat it.
## 4.12. Quantification and Statistical Analyses
All experimental procedures and data analyses were conducted in a blinded manner. The number of replicates (N or n) indicated in the figure legends refers to the number of experimental subjects independently treated in each experiment. All statistical analyses were performed in GraphPad Prism (GraphPad Software) unless otherwise stated. For normally distributed data, Student’s t-test and one-way ANOVA followed by Scheffe’s post hoc test were used to analyze the significance between groups. For non-normally distributed data, Mann–Whitney U-tests were used to calculate the significance between groups. All statistical data can be found in the figure legends. Statistical significance was set at * $p \leq 0.05$, ** $p \leq 0.01$, and *** $p \leq 0.001$. The data are presented as means ± s.e.m.
# Prevalence of Diabetic Retinopathy and Use of Common Oral Hypoglycemic Agents Increase the Risk of Diabetic Nephropathy—A Cross-Sectional Study in Patients with Type 2 Diabetes
## Abstract
Objective: This study investigated the effect of amino acid metabolism on the risk of diabetic nephropathy under different diabetic retinopathy conditions and with the use of different oral hypoglycemic agents. Methods: This study retrieved 1031 patients with type 2 diabetes from the First Affiliated Hospital of Liaoning Medical University in Jinzhou, Liaoning Province, China. We conducted a Spearman correlation analysis between diabetic retinopathy and the amino acids that affect the prevalence of diabetic nephropathy. Logistic regression was used to analyze the changes in amino acid metabolism under different diabetic retinopathy conditions. Finally, the additive interaction between different drugs and diabetic retinopathy was explored. Results: The results showed that the protective effect of some amino acids on the risk of developing diabetic nephropathy is masked in diabetic retinopathy. Additionally, the additive effect of combinations of different drugs on the risk of diabetic nephropathy was greater than that of any one drug alone. Conclusions: We found that diabetic retinopathy patients have a higher risk of developing diabetic nephropathy than the general type 2 diabetes population. Additionally, the use of oral hypoglycemic agents can also increase the risk of diabetic nephropathy.
## 1. Introduction
Diabetes is a metabolic disorder caused by an absolute or relative insufficiency of insulin secretion, with type 2 diabetes (T2D) being the most common form. In 2021, 537 million adults aged 20–79 years had diabetes worldwide, and by 2045, the number is expected to rise to 783 million [1]. T2D is associated with many adverse complications, which are a major cause of death among patients with T2D. Diabetic microangiopathy is a group of common complications of diabetes, of which diabetic retinopathy (DR) and diabetic nephropathy (DN) are the most common. DN manifests as proteinuria and a progressive decrease in glomerular filtration rate (GFR) resulting from prolonged diabetes. The incidence of DN is on the rise in China, where it has become the second leading cause of end-stage renal disease, after the various forms of glomerulonephritis. As both belong to microvascular disease, the correlation between DR and the risk of incident DN has become a research topic. According to a UK study, DR occurs earlier than other complications [2]. The presence of DR means not only vision problems but also an increased risk of other microvascular and macrovascular complications [3]. Some studies have found that DR occurs earlier than DN, can promote the development of DN, and can also help diagnose DN [4]. Additionally, DN patients with concurrent DR have an increased risk of rapid renal disease progression and generally worse renal outcomes [5]. With the further application of metabolomics to the study of the pathogenesis of T2D and its complications, researchers have found that amino acid metabolism is closely related to microangiopathy. Amino acids such as leucine (Leu) [6], histidine (His) [7,8], phenylalanine (Phe), and tyrosine (Tyr) [9] have been confirmed to be significantly related to the occurrence and development of DN. Metabolomics has gradually become an important methodological framework for revealing the risk factors of DN. However, it remains unclear whether the relationship between amino acids and DN holds in people with different DR status.
Metformin, Acarbose, and Sulfonylureas are common oral hypoglycemic agents used in the treatment of diabetes. The protective effect of Metformin on the kidney has been widely studied. One study found that Metformin has a potential protective effect against DN through the AMPK/SIRT1-FoxO1 pathway [10], whereas other studies have found no such relationship. However, the simultaneous effects of the other two drugs on DN risk have received less attention, and whether there are interactions between different drugs remains unclear. Identifying risk factors and enabling early prediction of diabetes and its complications is therefore critical for reducing diabetes complications [11,12] and their economic burden [13], and is beneficial from both clinical and public health perspectives [14]. Against this background, we investigated the effects of DR and drug use on the risk of DN.
## 2.1. Study Method and Population
All information on T2D patients was retrieved from the First Affiliated Hospital of Liaoning Medical University (FAHLMU), a tertiary general hospital located in Jinzhou, Liaoning Province, China. The inclusion criteria for this study were: (1) patients diagnosed with T2D or treated with anti-hyperglycemic therapy; (2) complete information on diabetic microvascular disease, including DN and DR; (3) complete information on the use of Metformin, Acarbose, and Sulfonylureas. The exclusion criteria were: (1) T2D patients under the age of 18; (2) subjects lacking amino acid indicators, height, weight, or blood pressure; (3) patients with extreme amino acid outliers. A total of 1821 patients with T2D were preliminarily included in this study. After applying the exclusion criteria, 1031 subjects were finally included, comprising 188 patients in the DN group and 843 T2D patients in the control group (Figure 1). Multiple imputation was then used to deal with the missing data.
The study was approved by the Ethics Committee for Clinical Research of FAHLMU. Due to the retrospective nature of the study, the requirement for informed consent was waived, which is consistent with the Declaration of Helsinki.
## 2.2. Data Collection and Clinical Definitions
We retrieved data from electronic medical records, including demographic and anthropometric information, clinical parameters, and information on diabetic complications. Demographic data included gender and age. Anthropometric measurements included weight, height, systolic blood pressure (SBP), and diastolic blood pressure (DBP). Clinical parameters included total cholesterol (TC), triglycerides (TG), glycosylated hemoglobin (HbA1c), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), creatinine (Crea), and uric acid (UA). Additionally, the duration of DN was recorded to exclude the interference of disease duration with the results.
Anthropometric indicators were measured using standardized hospital procedures. Participants wore light clothes and no shoes. Weight and height were measured to the nearest 0.1 kg and 0.5 cm, respectively. Blood pressure was measured with a standard mercury sphygmomanometer and an appropriately sized cuff on the right arm after 10 min of rest in a seated position. Age was calculated in years from the date of birth to the date of medical examination or hospitalization. The body mass index (BMI) was calculated as the ratio of body weight (kg) to squared height (m) and classified according to the overweight and obesity criteria recommended by the National Health Commission of China [15]. The diagnosis and classification of T2D were based on World Health Organization (WHO) criteria or treatment with antihyperglycemic therapy [16]. DR was diagnosed based on eye examination results in T2D [17]. DN was diagnosed based on the standards of care for T2D [18]. According to the restricted cubic spline (RCS) curves, His, tryptophan (Trp), valine (Val), and threonine (Thr) were stratified at 51 μmol/L, 46 μmol/L, 133 μmol/L, and 24 μmol/L, respectively (Figure 2).
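For reference, the BMI computation reads as in the sketch below. The overweight/obesity cutoffs used here (24 and 28 kg/m²) are our reading of the Chinese criteria cited in reference [15] and should be verified against that source; the example values are invented.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by squared height (m)."""
    return weight_kg / height_m ** 2


def bmi_category(b: float) -> str:
    # Cutoffs assumed from the Chinese National Health Commission
    # criteria cited in the text (verify against reference [15]).
    if b >= 28.0:
        return "obese"
    if b >= 24.0:
        return "overweight"
    return "normal or underweight"


print(bmi_category(bmi(72.5, 1.70)))  # BMI ~ 25.1 -> "overweight"
```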
## 2.3. Amino Acid Quantification and Equipment
Details of the metabolomics assessment method have been published previously [19]. Briefly, fasting (8 h) blood samples were collected at admission. A total of 22 amino acids were detected via LC-MS: asparagine (Asn), alanine (Ala), arginine (Arg), citrulline (Cit), Leu, lysine (Lys), Trp, Tyr, Thr, Val, glycine (Gly), proline (Pro), Phe, glutamine (Gln), His, methionine (Met), serine (Ser), ornithine (Orn), glutamate (Glu), aspartate (Asp), piperamide (Pip), and cysteine (Cys). An AB Sciex 4000 QTrap system (AB Sciex, Framingham, MA, USA) was used to conduct direct-injection MS metabolomic analysis. Analyst v1.6.0 software (AB Sciex) was used for data collection, and ChemoView 2.0.2 (AB Sciex) for data preprocessing. Isotope-labeled internal standards were purchased from Cambridge Isotope Laboratories (Tewksbury, MA, USA). Amino acid standards were purchased from Chrom Systems (Grafelfing, Germany).
## 2.4. Statistical Analysis
Continuous data were expressed as mean ± standard deviation (SD), non-normally distributed data as median (interquartile range), and categorical variables as numbers (percentages). Within each DR stratum, we tested whether patient characteristics differed between the DN and non-DN groups. Normally distributed continuous variables were compared using t-tests or ANOVA, non-normally distributed variables using rank-sum tests, and categorical variables using chi-square tests.
Characteristics of participants were described and compared according to the prevalence of DN. Based on these results, amino acids with significant differences by DN prevalence were screened out for further analysis. Spearman correlation analysis was performed between these amino acids and DR. A binary logistic regression model stratified by DR status was then used to obtain odds ratios (OR) and $95\%$ confidence intervals ($95\%$CI) for the association of each amino acid with DN. Traditional risk factors for DN in T2D patients were adjusted for in a structured manner: the model adjusted for age, gender, BMI, SBP, DBP, TG, TC, HbA1c, HDL-C, LDL-C, duration of DN, UA, and Crea. Finally, the additive interaction between the different drugs and DR was analyzed, and the additive-interaction measures were calculated. All analyses were performed using R version 4.1.0.
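The additive-interaction measures referred to above are conventionally Rothman’s relative excess risk due to interaction (RERI), attributable proportion (AP), and synergy index (S). A minimal sketch of their computation from fitted odds ratios follows; the input values are illustrative only, not the estimates reported in this study.

```python
def additive_interaction(or11: float, or10: float, or01: float):
    """Rothman's additive-interaction measures from the joint (or11) and
    single-exposure (or10, or01) odds ratios vs. the doubly unexposed."""
    reri = or11 - or10 - or01 + 1.0  # relative excess risk due to interaction
    ap = reri / or11                 # attributable proportion
    s = (or11 - 1.0) / ((or10 - 1.0) + (or01 - 1.0))  # synergy index
    return reri, ap, s


# Illustrative inputs only -- not the estimates reported in Table 4.
reri, ap, s = additive_interaction(or11=2.5, or10=1.6, or01=1.3)
print(f"RERI = {reri:.2f}, AP = {ap:.2f}, S = {s:.2f}")
```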
## 3.1. Description of Study Subjects
Table 1 summarizes selected characteristics of the DN and non-DN groups, stratified by DR status, in the total population. The study included 1031 participants, with a mean age of 57.24 years (SD: 13.82) and a mean BMI of 25.29 (SD: 3.85); 548 patients were male ($53.15\%$).
We then divided the total population into two groups based on the prevalence of DR. In the DR group, the mean age was 57.77 years (SD: 9.96), the mean BMI was 25.09 (SD: 3.31), and there were 73 males ($45.1\%$). When compared according to the DN outcome, differences in TC, UA, and the use of the three oral antidiabetic drugs were statistically significant between the two groups: patients with DN had higher TC and UA, and a greater proportion of them used all three common drugs. In the T2D population without DR, the mean age was 57.14 years (SD: 14.43), the mean BMI was 25.33 (SD: 3.95), and there were 475 males ($54.7\%$). In this group, age, BMI, SBP, HDL-C, UA, Crea, and the use of Acarbose differed significantly between the DN and non-DN subgroups: patients with DN were older, had higher BMI, SBP, HDL-C, UA, and Crea, and a larger percentage of them used Acarbose.
## 3.2. Differences in Individual Amino Acids According to the Appearance of DN
We observed that 10 amino acids (Leu, Phe, Trp, Tyr, His, Val, Gly, Thr, Cit, and Ser) differed significantly between T2D patients with and without DN (Table 2). Except for Cit, the concentrations of the other nine amino acids were lower in DN patients than in T2D patients without DN.
## 3.3. Correlations between Amino Acids and DR and the Impacts of DR on Amino Acids for DN
We performed correlation analyses between the 10 selected amino acids and DR (Figure 3). The results showed that, except for Cit, the amino acids were positively correlated with DR, and all correlations were statistically significant. Among the amino acids, the correlation between Leu and Val was the strongest ($r = 0.84$). Given the significant correlations between amino acids and DR, we next analyzed the relationship between amino acids and DN risk stratified by the prevalence of DR (Table 3). The results showed that the protective effect of His, Trp, Val, and Thr on the risk of DN was no longer significant in the DR group.
## 3.4. Additive Interaction between Oral Hypoglycemic Drugs and DR
Table 4 shows the additive interactions between different drugs and DR. Concomitant use of Acarbose and Metformin increased the risk of DN (OR: 1.61, $95\%$CI: 1.13–2.29), and, although either Acarbose or Metformin alone increased the risk of DN, the risk with concomitant use of both drugs was higher than that with either drug alone. Similar results were observed for the additive interaction of Acarbose and Sulfonylureas. In the interaction analysis of Sulfonylureas, Metformin, and DR, the highest risk of DN was associated with the use of Sulfonylureas alone in the presence of DR (OR: 2.95, $95\%$CI: 1.5–5.81), followed by DR combined with the use of both Sulfonylureas and Metformin (OR: 2.56, $95\%$CI: 1.56–4.21).
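Additive interaction is conventionally summarized by the relative excess risk due to interaction (RERI), the attributable proportion (AP), and the synergy index (S). The paper does not give its exact code, so the following is only a sketch of the standard formulas:

```python
def additive_interaction(or_11, or_10, or_01):
    """Standard additive-interaction measures from the ORs of a joint
    exposure design (e.g., a drug crossed with DR status), where or_11
    is the OR for both exposures and or_10/or_01 for each one alone."""
    reri = or_11 - or_10 - or_01 + 1               # relative excess risk due to interaction
    ap = reri / or_11                              # proportion of joint effect attributable to interaction
    s = (or_11 - 1) / ((or_10 - 1) + (or_01 - 1))  # synergy index
    return reri, ap, s
```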
## 4. Sensitivity Analysis
After random forest imputation of missing values (UA = 187, TG = 288, TC = 289, Crea = 147), the effects of the selected amino acids on the risk of DN stratified by DR in T2D remained stable and significant in multivariable analyses (Table 5).
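The imputation routine itself is not specified beyond "random forest"; a plausible scikit-learn analogue of missForest-style imputation, offered only as a sketch, is:

```python
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Iteratively regress each incomplete variable (UA, TG, TC, Crea) on the
# remaining numeric columns with a random forest, missForest-style.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10, random_state=0)
# numeric_df[:] = imputer.fit_transform(numeric_df)  # DataFrame of numeric columns
```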
## 5. Discussion
In recent years, many studies have found that the risk of DN in DR patients is higher than that in the general T2D population. One study found a significant association between DR and a subsequently increased risk of DN in T2D patients, with younger patients at greater risk than older patients [20]. A study in Sudanese adults with diabetes showed a significant association between DR and DN [21]. Additionally, one study found that DR contributes to the diagnosis of DN in patients with T2D and kidney disease, but its severity may not parallel the presence of DN [4]. Klein et al. and Kofoed-Enevoldsen et al. suggested that the occurrence of DN and DR may be regulated by similar molecular pathways, which means that patients with DN may have already developed DR and patients with DR are vulnerable to developing DN. However, results in T2D have been inconsistent [22,23]. Apart from population heterogeneity, the duration of T2D in different study populations is also an important contributor to the inconsistent results. At the same time, because DN causes proteinuria, which indicates significant renal lesions, the inclusion of some patients with glomerulopathy, who present similar findings but do not have DN, may also affect the results [24].
Our results showed that the protective effects of His, Trp, Val, and Thr on the risk of DN were affected by the prevalence of DR. A metabolomic study of DR found Asn, dimethylamine, His, Thr, and Gln to be the most variable metabolites in DR patients [25]. Another study confirmed that the metabolism of Gly, Ser, Trp, and Thr is significantly disturbed in DR patients, especially Trp metabolism [26]. Additionally, valine–leucine–isoleucine biosynthesis was found to be significantly disturbed in DR patients [27].
Some published studies have shown that Metformin can protect the kidney. However, our study shows that Metformin increases the risk of DN. We believe that, apart from population heterogeneity, the reason for the discrepancy is that the use of Metformin leads to changes in the metabolism of other nutrients. Multiple studies have shown that long-term use of Metformin reduces Vitamin B12 (VB12) levels in the body [28,29,30]. A study in a North Indian population found that VB12 supplementation prevented the development of DN and improved the overall management of people with diabetes [31]. In animal experiments, both folic acid and VB12 were found to reduce 24 h urinary albumin, with a better effect when combined [32]. High concentrations of homocysteine (Hcy) have been identified as a risk factor for DN [33,34]. Vitamin B supplementation can effectively reduce Hcy levels, with VB12 being the more effective at reducing Hcy concentration [35].
One study found that, although Acarbose can significantly improve blood sugar levels in DN patients, proteinuria did not improve [36]. Additionally, in mouse experiments, Acarbose was found not to significantly reduce the incidence and severity of glomerulosclerosis [37]. This is consistent with our findings: using Acarbose alone had no significant effect on the development of DN. However, the results showed that the effect of Metformin on DN was amplified when Metformin and Acarbose were used together. Additionally, our study showed that the effect of Sulfonylureas on DN was not significant when used alone. In published studies, some scholars have pointed out that gliquidone, a Sulfonylurea, can ameliorate the symptoms of DN by inhibiting the Notch/Snail1 signaling pathway, improving the antioxidative response, and delaying renal interstitial fibrosis [38]. On the other hand, studies have also found that Glibenclamide (another Sulfonylurea) should be used cautiously in patients with stage 2 and 3 DN, and that Sulfonylureas are contraindicated in patients with stage 4 DN [39]. We believe that the difference may be caused by the different duration and stage of DN in each study.
Based on the current research on microvascular disease, our study further explored the effect of amino acid metabolism changes on the risk of DN under different DR conditions. At present, there are few published studies on the relationship between amino acid metabolism and DN. The amino acids selected from our results can provide directions and ideas for further refinement of DN metabolism research. At the same time, our research has some shortcomings. [1] Due to the nature of the cross-sectional study, we could not prove causality between DR and the progression of DN [40,41], and the order of occurrence between DR and DN cannot be determined. At present, some studies also suggest that the occurrence of DN promotes the development of DR. For example, a study conducted in Pakistan indicated that DN is an independent risk factor for the development and progression of DR [42]. However, unlike DN, DR is not only a microvascular disease but can also cause a certain degree of nerve damage. Because its pathogenic factors are more complex and its related lesions more numerous, its onset is generally considered to precede that of DN and has been studied more extensively. More prospective studies are needed to establish the causal relationship between them. [2] The lack of available vitamin indicators makes it difficult to verify the effects of drugs on vitamins. [3] Other than the three drugs mentioned in the article, the effects of other oral hypoglycemic agents were not included in the study. Our laboratory team is now monitoring vitamin indicators and collecting information on more types of hypoglycemic drugs to validate our results in follow-up work.
In conclusion, we identified amino acids that have a protective effect on the risk of DN and found that DR patients had a higher risk of developing DN. Additionally, the use of oral hypoglycemic drugs can also increase the risk of DN, with combined use of drugs having a stronger effect than any single drug alone. The occurrence of the disease is a complex process with multiple contributing factors, and more follow-up studies are needed to confirm the associations among different risk factors, so as to better intervene in the disease.
# The Influence of the COVID-19 Pandemic Emergency on Alcohol Use: A Focus on a Cohort of Sicilian Workers
## Abstract
The period between the beginning and the end of the COVID-19 pandemic emergency generated a general state of stress, affecting both the mental state and physical well-being of the general population. Stress is the body’s reaction to events or stimuli perceived as potentially harmful or distressing. Particularly when prolonged over time, it can promote the consumption of different psychotropic substances such as alcohol, and thus the genesis of various pathologies. Therefore, our research aimed to evaluate the differences in alcohol consumption in a cohort of 640 video workers who carried out activities in smart working, subjects particularly exposed to stressful situations due to the stringent rules of protection and prevention implemented during the pandemic. Furthermore, based on the results obtained from the administration of the AUDIT-C, we wanted to analyse the different modes of alcohol consumption (low, moderate, high, severe) to understand whether there is a difference in the amount of alcohol consumed that could predispose individuals to health problems. To this end, we administered the AUDIT-C questionnaire in two periods (T0 and T1), coinciding with annual occupational health specialist visits. The results of the present research showed an increase in the number of subjects consuming alcohol ($$p \leq 0.0005$$) and in their AUDIT-C scores ($p \leq 0.0001$) over the period considered. A significant decrease in subgroups who drink in a low-risk ($$p \leq 0.0049$$) mode and an increase in those with high ($$p \leq 0.0012$$) and severe risk ($$p \leq 0.0002$$) were also detected. In addition, comparing the male and female populations, it emerged that males have drinking patterns that lead to a higher ($$p \leq 0.0067$$) health risk of experiencing alcohol-related diseases than female drinking patterns. Although this study provides further evidence of the negative impact of the stress generated by the pandemic emergency on alcohol consumption, the influence of many other factors cannot be ruled out. Further research is needed to better understand the relationship between the pandemic and alcohol consumption, including the underlying factors and mechanisms driving changes in drinking behaviour, as well as potential interventions and support strategies to address alcohol-related harm during and after the pandemic.
## 1. Introduction
11 March 2020 has now become a historic date. On that day, the World Health Organisation (WHO), following a careful analysis of the risks associated with the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), declared that the COVID-19 epidemic could be considered a real pandemic [1,2,3,4]. Since then, the world’s population has had to change its lifestyle, aligning with the rules laid down by the various governments (e.g., Italian, British, French, American) [5,6,7,8] concerning the prevention and protection methods to be implemented in private life, in public places, in school and university environments and in the workplace [9,10,11,12,13,14,15]. Moreover, this health emergency has forced public and private administrations to resort to smart working, or agile working, as a suitable method to manage and contain the pandemic.
These changes have had a profound impact on working and social life. All this, together with the continuous evolution of the rules to be followed, has led to a general and persistent malaise that has facilitated the onset of various forms of stress and related disorders [16,17,18,19,20,21,22], including the one now identified as ‘COVID-19 stress’ [23].
Stress is a generic term often used to indicate adverse life conditions [24]. Exposure to a stressful stimulus over a long period can promote the onset of different moods such as anxiety, fear, anger, excitement, and sadness that can, in the case that they exceed the individual’s coping abilities, promote the occurrence of different pathologies [25,26,27,28,29] and increase vulnerability to use of substances of abuse [30,31,32].
Furthermore, continued exposure to aversive stimuli is influenced by different contexts, such as education (school, university) and work [33,34,35,36]. It has been pointed out that the working environment, its organisation, and work-related behaviour are themselves stressors, and as such can influence workers’ psychological well-being [37]. Recently, different research has focused on the relationship between stress at work, aggravated by the new prevention and protection guidelines due to the pandemic emergency, and the development of mental disorders and risk behaviours such as the use of substances of abuse [37,38]. In this context, the risk of developing such conditions is related to the type of work performed, the potential for social interaction (prolonged or not), and exposure to different environmental contaminants that would promote the genesis of other pathologies.
Notably, among the addictive behaviours related to stressful conditions, alcohol abuse leads the way due to alcohol’s easy obtainability and organoleptic properties [39,40]. In this context, additional scientific evidence shows that people who experience periods of severe economic or psychological stress are more inclined to consume alcoholic beverages with the consequent onset of abuse and addiction behaviour [41,42]. The pandemic has led to changes in alcohol consumption patterns, with some individuals drinking more due to increased stress and isolation. In contrast, others have reduced or abstained due to health or financial concerns. Interesting research by Sohi and colleagues has shown that during the pandemic, the amount and mode of alcohol intake are substantially heterogeneous and depend on the country in which the research was conducted. These authors suggest that further research is needed to understand better the relationship between the pandemic and alcohol consumption, including the underlying factors and mechanisms driving changes in drinking behaviour, and to create potential interventions and support strategies to address alcohol-related harm during and after the pandemic [43].
Based on the aforementioned, this research aimed to assess how both the approach to and the mode of consumption of alcoholic beverages changed during the pandemic period in a population of video workers who were forced by the pandemic to carry out their activities in smart working. Before administering the AUDIT-C questionnaires, we excluded part of the population based on different criteria. In particular, we decided to exclude subjects with a body mass index equal to or greater than 32 and those with dysmetabolic or oncological pathologies. This decision stems from the known influence of these variables on alcohol consumption. Indeed, it has been reported that individuals with a high BMI, particularly those with obesity, are at increased risk of developing dysmetabolic pathologies such as diabetes, metabolic syndrome, and non-alcoholic fatty liver disease, which in turn can increase the risk of developing alcohol addiction by altering the body’s response to alcohol and affecting the brain’s reward pathways [44,45]. Moreover, some cancer treatments such as chemotherapy can be less effective in individuals who consume alcohol. Therefore, it is probable that individuals with an oncological pathology will limit or avoid alcohol consumption to reduce the risk of cancer progression and other health complications [46].
Finally, based on the data collected through the administration of the AUDIT-C test, it was possible to classify the population into different categories that accounted for the risk of encountering pathologies related to improper consumption of alcohol.
Given the scientific evidence on the increasing consumption of alcoholic beverages, the hypothesis of our study focuses on the idea that the pandemic period, marked by stringent norms of prevention and protection, was a risk factor that could exert such pressure as to influence the mode of alcohol consumption as much as the amount of alcohol consumed.
## 2.1. Experimental Design
This observational study was conducted on a cohort of video workers, considering the recommendations indicated by the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) [47]. The sample of this study is “opportunistic” because data were collected based on the availability of participants at a private practice of occupational medicine in Palermo, Italy.
The study was conducted in two different periods: T0: June 2020 and T1: April 2022, the date on which, given the end of the emergency state (31 March 2022), apart from vulnerable subjects, the majority of the working population was considered to have returned to work in person.
For this study, subjects of both sexes aged between 25 and 65 years with a video work history of at least four years were enrolled. The population was initially 800 (T0) workers (400 males and 400 females). Among these, $11\%$ (63 F and 25 M) refused to participate in the study, and $6\%$ (11 F and 37 M) were excluded from the study because they did not show up for the specialist visit as they were no longer employed by their companies or were absent due to illness or other causes. A further $3\%$ (9 F and 15 M) of subjects who, at the time of the first medical examination, were on drug therapy for anxiety disorders, depression, or other psychiatric disorders, were also excluded; workers with a body mass index greater than or equal to 32, employees on drug therapy for dysmetabolic pathologies, all workers with previous or current oncological pathologies, and workers with a previous history of pathological addictions were also excluded. All subjects admitted to the study met the inclusion criteria considered in our study.
Eventually, the study enrolled 640 adults: 321 males and 319 females (M/F ratio 1.006). The number of subjects analysed at time T0 was identical to that of T1, although their number varied within the subgroups considered in this study.
At the end of the patients’ general anamnesis, all participants were asked to fill in a questionnaire to establish their alcohol consumption patterns and degree of alcohol dependence.
All participants were informed about the purpose of the study and signed the informed consent before participating. Respondents were asked not to mention their or the organisation’s names in the questionnaire to ensure privacy and anonymity.
All data have been handled according to Italian law to protect privacy (Decree No. 196, January 2003). A multidisciplinary team of health experts collected and analysed the data through the questionnaires administered on alcohol habits.
## 2.2. Assessment of Alcohol Consumption and Degree of Dependence
The assessment of alcohol consumption and the relative risk associated with its use was conducted by administering the Alcohol Use Disorders Identification Test-Concise (AUDIT-C), a modified version of the 10-question Alcohol Use Disorders Identification Test (AUDIT) developed by the World Health Organisation. This test is valuable for investigating alcohol consumption and how it occurs. It also allows us to identify patients who are hazardous drinkers and those who are particularly at risk of developing alcohol-related disorders.
This instrument is a 3-item survey with a total score ranging from 0 to 12 points. Each item has five response options ranging from 0 to 4 points. A score of 3 or more points on the AUDIT-C may indicate that people are risk drinkers or have alcohol use disorders. A score of 4 or more for men and 3 or more for women is predictive of potential alcohol abuse. A person’s likelihood of developing an alcohol use disorder increases with the test score [48].
Furthermore, based on the score obtained from the AUDIT-C test, we divided our population into five different categories: abstainer (score = 0), low risk (score = 1–3 for men; 1–2 for women), moderate risk (score = 4 for men; 3–4 for women), high risk (score = 5–7 for both men and women) and severe risk (score = 8–12). The groups were structured based on previous research on the association between alcohol intake and health risks [49,50,51,52].
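These sex-specific cut-offs reduce to a simple scoring rule; a sketch (mirroring the bands above, not an official AUDIT-C implementation) is:

```python
def audit_c_category(score: int, sex: str) -> str:
    """Map an AUDIT-C total score (0-12) to the risk bands used here;
    sex is 'M' or 'F', and low/moderate cut-offs are sex-specific."""
    if score == 0:
        return "abstainer"
    if sex == "M":
        if score <= 3:
            return "low risk"
        if score == 4:
            return "moderate risk"
    else:
        if score <= 2:
            return "low risk"
        if score <= 4:
            return "moderate risk"
    return "high risk" if score <= 7 else "severe risk"

# audit_c_category(3, "M") -> 'low risk'; audit_c_category(3, "F") -> 'moderate risk'
```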
## 2.3. Statistical Analysis
The statistical analysis of the data was conducted using the GraphPad Prism 8.01 statistical software package (GraphPad Company, San Diego, CA, USA). Initially, the collected data were analysed to understand whether they were normally distributed and, consequently, to choose the most suitable statistical analysis to apply. To do this, we applied the D’Agostino–Pearson omnibus normality test. Given that our data did not follow a normal distribution, we used the non-parametric Chi-square test to determine whether the frequency values obtained with the survey were significantly different from those of the theoretical distribution. Specifically, the Chi-square test was applied to understand whether there were differences in the number of total consumers and between the male and female samples in the two periods, and to assess possible variations in the risk categories obtained from the analysis of the AUDIT-C test data over the time interval considered. Moreover, logistic regression was also performed to calculate the probability of the association between alcohol consumption and gender. Data are expressed as odds ratios (OR).
The Wilcoxon test was applied for paired data, and the Mann–Whitney U test was used for unpaired data to assess the differences in AUDIT-C scores in the study population. A descriptive analysis of the data obtained was also conducted to understand the consumption pattern and the amount of alcohol consumed. Data were reported as mean with $95\%$ CI. Statistical significance was set at $p \leq 0.05.$
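For illustration only (the study itself used GraphPad Prism), the same battery of tests can be reproduced with SciPy; the drinker counts below are those reported in Section 3.1:

```python
import numpy as np
from scipy import stats

# Drinkers vs. abstainers at T0 and T1 (counts reported in Section 3.1).
table = np.array([[467, 173],   # T0
                  [519, 121]])  # T1
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)

# AUDIT-C scores: Wilcoxon for the same subjects measured at T0 and T1,
# Mann-Whitney U for independent subgroups (e.g., males vs. females).
# stats.wilcoxon(scores_t0, scores_t1)
# stats.mannwhitneyu(scores_males, scores_females)
```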
## 3.1. Alcohol Consumption in the General Population
The data used to identify the number of subjects consuming alcohol and their risk of developing problems related to the misuse of the substance were collected and analysed through the administration of the AUDIT-C.
In detail, within the sample analysed, the number of subjects who consumed alcoholic beverages, both at T0 (467; $72.97\%$; audit score (AS) 3.229, confidence interval (CI) 3.072–3.386) and at T1 (519; $81.09\%$; AS 3.925, CI 3.746–4.104), was higher than the number who claimed not to drink at T0 (173; $27.03\%$) and at T1 (121; $18.91\%$). Moreover, among the subjects consuming alcoholic beverages, the percentages of subjects drinking at the different risk levels differed. Indeed, subjects who consumed alcohol in a manner considered low risk, both at T0 (298; $63.81\%$; AS 2.168, CI 2.069–2.258) and at T1 (245; $47.21\%$; AS 2.139, CI 2.032–2.245), prevailed over those who consumed it in riskier ways (Table 1).
When we analysed the consumption of alcoholic beverages in a subgroup of drinkers, the descriptive analysis of the data showed that there was a reduction in the percentage of the number of low-risk subjects and an increase in those at moderate, high and severe risk between the time intervals analysed (Figure 1).
Considering the data obtained from the descriptive analysis, we assessed whether there were differences in the number of consumers and those belonging to the different risk categories in the two periods considered. In detail, statistical analysis by the Chi-square test showed a significant increase in the percentage of total consumers (χ2 = 11.94, $z = 3.455$, $$p \leq 0.0005$$). The analysis of the data on the number of subjects consuming alcohol in different risk modes revealed a reduction in the percentage of subjects consuming alcohol in a low-risk manner (χ2 = 7.915, $z = 2.813$, $$p \leq 0.0049$$) and an increase in the high (χ2 = 10.54, $z = 3.247$, $$p \leq 0.0012$$) and severe (χ2 = 13.92, $z = 3.731$, $$p \leq 0.0002$$) risk groups at T1 compared to T0. There were no significant differences in the percentage of moderate-risk drinkers between T1 and T0 (χ2 = 0.8292, $z = 0.9106$, $$p \leq 0.3625$$) (Figure 2).
## 3.2. Differences in Alcohol Consumption between Males and Females
Given the data obtained on drinking behaviour in the sample analysed, we wondered whether there were differences between the percentages of male and female subjects regarding alcohol consumption and differences in the risk related to alcohol consumption (Table 2).
The analysis conducted by applying the Chi-square test did not reveal any significant differences in the percentages of alcohol drinkers between males and females (χ2 = 1.230, $z = 1.109$, $$p \leq 0.2675$$; χ2 = 1.150, $z = 1.072$, $$p \leq 0.2836$$) in the two timeframes considered.
When we evaluated the differences in the consumption of alcoholic beverages obtained from the analysis of the AUDIT-C test, the Chi-square analysis did not reveal statistically significant differences at either T0 or T1 between males and females regarding low (χ2 = 0.05162, $z = 0.2272$, $$p \leq 0.8203$$; χ2 = 0.7793, $z = 0.8828$, $$p \leq 0.3774$$), moderate (χ2 = 0.3778, $z = 0.6146$, $$p \leq 0.5388$$; χ2 = 0.06674, $z = 0.2583$, $$p \leq 0.7961$$) and high (χ2 = 0.06298, $z = 0.2509$, $$p \leq 0.8019$$; χ2 = 0.1887, $z = 0.4344$, $$p \leq 0.6640$$) risk. When we analysed the data concerning severe-risk drinking, there were no differences at T0 (χ2 = 2.823, $z = 1.680$, $$p \leq 0.0929$$) between males and females. On the contrary, at T1, we found a statistically significantly higher percentage of males drinking in a manner that exposes them to a severe health risk (χ2 = 7.350, $z = 2.711$, $$p \leq 0.0067$$) compared to females (Figure 3).
Furthermore, based on the data obtained, we calculated the probability of the association between alcohol consumption and gender in the two time periods considered. Specifically, male subjects were not significantly more likely to drink than female subjects at either time point (OR: 0.9863, $95\%$ CI: 0.9148–1.063; OR: 1.042, $95\%$ CI: 0.9763–1.112).
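Such ORs can be derived from the 2×2 gender-by-drinking table at each time point; in the sketch below the per-sex cell counts are hypothetical splits (consistent with the T0 totals, but not the actual Table 2 values):

```python
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

# Hypothetical [drinkers, abstainers] counts by sex at one time point.
males, females = [236, 85], [231, 88]
t = Table2x2(np.array([males, females]))
print(t.oddsratio, t.oddsratio_confint())  # OR of drinking, males vs. females
```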
We also analysed the differences in the AUDIT-C score between T0 and T1. Statistical analysis was conducted using the Wilcoxon test to detect any differences in the AUDIT-C score at the two time points covered by our study. The analysis showed a significant increase in the score at T1 ($p \leq 0.0001$) compared to that obtained at T0.
## 4. Discussion
The pandemic emergency experienced in recent years has drastically changed many aspects of daily life. The two waves of the contagion, which occurred over a relatively short period of time, have led to isolation, forced living in confined spaces and profound changes in everyone’s working life [15,53]. All of these things have exerted intense pressure on the adaptive capacities of the population; while in the first phase, these capacities served to cope with adversity by drawing on our instinctive spirit of survival, with the prolongation of the pandemic, they have fostered the development of a condition of persistent stress which may alter an organism’s internal homeostasis and lead to the onset of different pathologies and/or the establishment of addictive behaviour. This may include an increase in alcohol consumption [21,54,55].
The trend recorded for the consumption of alcoholic beverages is well in line with the data obtained from the present observational study, in which there was an increase in the number of alcohol drinkers (+$10.02\%$) over the time interval examined. It is also interesting to note that the percentage of subjects who consume alcohol was always higher ($72.97\%$ and $81.09\%$) than that of those who claim not to drink ($27.03\%$ and $18.91\%$).
The result concerning alcohol consumption behaviour gives pause for thought. In particular, the data showed a changed pattern of alcohol consumption. Specifically, following analysis of the subgroup categories, a reduction in the number of subjects who consume alcohol in a manner that exposes them to a low health risk emerged both in males (−$32.14\%$) and females (−$12.78\%$) over the time interval considered. In addition, a significant increase in the number of subjects who consume alcohol in a manner that exposes them to high ($52.51\%$) and severe ($80.65\%$) health risks was revealed.
In addition to the increase in the consumption of alcoholic beverages, our data also showed an increase in the AUDIT-C score, both when we evaluated all the subjects of our study ($p \leq 0.0001$) and when we analysed the subgroup of drinkers ($p \leq 0.0001$). An increase in the AUDIT-C score can predict the development of physical or social problems related to alcohol consumption. In particular, the risks associated with alcohol consumption may vary depending on gender, age, general health and the amount and frequency of alcohol consumption. Excessive alcohol consumption can have serious adverse health consequences and increase the risk of liver disease, pancreatitis and certain types of cancer. It can also lead to different mental health problems such as depression and anxiety [56].
Increased consumption of alcoholic beverages and changes in the mode of consumption can be traced back to the emotional distress experienced during the COVID-19 pandemic. During this period, a large part of the population had to drastically change their daily routines, starting with their mode of work.
Remote working has, for example, encouraged social isolation and the onset of general malaise due to the impossibility of setting up a good workplace and/or reconciling work and private commitments effectively [57,58,59].
The office is a space, but it is, above all, a community. Working in the office means being surrounded by workers, collaborating, asking for help, chatting over a coffee, and having pure and simple human contact that makes us feel part of a group. Working from home means completely giving up the social and human component of office work.
Working from home and limiting opportunities for sociability and collaboration can only lead to a growing sense of isolation and increased health risks. This analysis may seem overly alarmist and pessimistic, but it is confirmed by various studies [59,60,61,62,63]. It has been shown that homeworking aligns with workers’ satisfaction only if it is not protracted for a long time. In fact, after an initial period of enthusiasm, there is a widespread desire to return to office life, even in the face of losing time and money for travel [64]. The reason for this choice is mainly the feeling of loneliness that affects home workers [65].
In-person working also underwent profound changes due to the implementation of multiple measures to contain the contagion [66,67]. Together, these changes generated a strong stimulus that disrupted the body’s normal internal balance and made the conditions experienced highly stressful.
This condition may partly explain the increase in the consumption of alcoholic beverages in a manner that exposes health risks, as was recorded in our study.
Stress is a factor closely correlated with often uncontrolled consumption of alcohol and with relapses into its use after a period of abstinence [68]. Different studies have shown that particularly dangerous and demanding work environments and family stress are factors associated with increased alcohol consumption [69,70,71,72]. This is partly attributable to increased cortisol release triggered by activation of the hypothalamic–pituitary–adrenal (HPA) axis, one of the main modulators of the adaptive stress response [73]. In particular, impaired regulation of the HPA axis is associated with problematic alcohol consumption, and the nature of this dysregulation varies with the stages of progression towards alcohol dependence [74,75,76].
The motivation that drives people to consume more and more alcohol can be traced back to the molecule’s action. In fact, alcohol exerts anxiolytic effects, and its intake promotes a reduction in the perception of stress [77,78]. Alcohol can modulate the activation of the HPA axis both directly and indirectly, resulting in a different regulation of glucocorticoid release and the consequent alteration of the adaptive stress response [79,80,81].
This reduction in the state of tension facilitated by alcohol intake is attributable to its ability to stimulate the action of different inhibitory neurotransmitters, such as γ-aminobutyric acid (GABA) and opioids. These, through inhibition of the hypothalamus’s paraventricular nucleus (PVN), modulate the release of neuropeptides that stimulate the synthesis and subsequent release of cortisol [72,82,83,84], thereby attenuating the stress response.
Alcohol can thus assume a positive reputation among the general population, who may use it as ‘self-medication’ to combat incredibly unpleasant living conditions and sources of stress. This encourages a growing amount of alcohol to be consumed, thereby promoting an increased risk of alcohol-related diseases.
## 5. Limitations of the Study
Although this research provides further evidence of the influence of stress on alcoholic beverage consumption, it is not without limitations that could be addressed in future studies. In particular, although the study was carried out on a reasonably homogeneous population, it could not consider the correlation between stress biomarkers assessed at the times considered and alcohol consumption patterns. This would have provided intriguing evidence of the risk of alcohol-related disease in the population examined.
## 6. Conclusions
Our study highlighted the way in which the imposition of smart working during the pandemic was one of the factors that negatively impacted the psycho-physical wellbeing of workers by causing stress that encourages the onset of risky behaviour.
In the population examined, it emerged that during the COVID-19 pandemic, the number of alcohol users and the modes of consumption of alcoholic beverages changed. From our study, the increase in alcohol consumption in ways that raise the health risk is a result that deserves particular concern and attention.
This result was related to difficult working conditions, which are a source of intense stress. In-depth knowledge of the risky ways in which an individual worker consumes alcohol can enable the implementation of preventative actions to safeguard their health and to improve the safety of the worker and those who work with them.
Further studies are necessary to determine the close correlations between work-related stress and risky alcohol consumption in individual video workers, especially after the COVID-19 pandemic.
# Dual-Coverage Monitoring of the Bile Acid Profile in the Liver–Gut Axis throughout the Whole Inflammation-Cancer Transformation Process: Revealing Hepatocellular Carcinoma Pathogenesis
## Abstract
Hepatocellular carcinoma (HCC) is the terminal phase of multiple chronic liver diseases, and evidence supports chronic uncontrollable inflammation being one of the potential mechanisms leading to HCC formation. The dysregulation of bile acid homeostasis in the enterohepatic circulation has become a hot research issue for revealing the pathogenesis of the inflammation-cancer transformation process. We reproduced the development of HCC through an N-nitrosodiethylamine (DEN)-induced rat model over 20 weeks. We monitored the bile acid profile in the plasma, liver, and intestine during the evolution of “hepatitis-cirrhosis-HCC” by using an ultra-performance liquid chromatography-tandem mass spectrometer for absolute quantification of bile acids. We observed differences in the levels of primary and secondary bile acids in plasma, liver, and intestine when compared to controls, particularly a sustained reduction of intestinal taurine-conjugated bile acid levels. Moreover, we identified chenodeoxycholic acid, lithocholic acid, ursodeoxycholic acid, and glycolithocholic acid in plasma as biomarkers for early diagnosis of HCC. We also identified bile acid-CoA:amino acid N-acyltransferase (BAAT), the enzyme that dominates the final step in the synthesis of conjugated bile acids, as associated with the inflammation-cancer transformation process by gene set enrichment analysis. In conclusion, our study provided comprehensive bile acid metabolic fingerprinting in the liver–gut axis during the inflammation-cancer transformation process, laying the foundation for a new perspective on the diagnosis, prevention, and treatment of HCC.
## 1. Introduction
HCC is one of the most serious malignant tumors threatening human health and the third leading cause of cancer-related death in the world [1,2,3]. Persistent inflammation leading to the formation of the tumor microenvironment is an important factor in the formation of HCC, and its mechanism is very complicated. The morbidity trend of HCC appears to be closely related to hepatitis B virus (HBV) infection, and it has been reported that HCC cases caused by HBV still account for more than half of the global total [4,5,6]. Although the inflammation-cancer transformation process of “hepatitis-cirrhosis-HCC” has become a research highlight for revealing the pathogenesis of HCC, there is still no effective clinical treatment strategy. Hence, it has become important to clarify the pathogenesis of this process to achieve early diagnosis of HCC and identify new therapeutic targets.
Among the various endogenous metabolites originating from the co-metabolism of the liver–gut axis, bile acids (BAs) have received increasing attention because of their neoplasm-promoting properties [7,8,9,10]. BAs are synthesized in the liver, and the size and composition of the liver bile acid pool are closely regulated by translocation proteins [11]. When liver organic solute transporter-alpha/beta (OSTα/OSTβ) expression is downregulated, abnormal retention of BAs in hepatocytes occurs, leading to chronic liver injury [12], and patients with bile acid transporter deficiency, in whom liver bile acid efflux is suppressed, have been diagnosed with HCC within 13 to 52 months [13]. In addition to the effect of liver bile acid accumulation on hepatocarcinogenesis, disruption of intestinal bile acid pool homeostasis can contribute to cancer development and a variety of chronic disease phenotypes. Elevated levels of secondary BAs in feces are capable of causing structural and functional abnormalities in the colonic epithelium through various mechanisms, including oxidative damage to DNA, activation of nuclear factor kappa-B, and enhanced cell proliferation [14]. However, depicting the spectrum of BAs and their interactions in plasma, liver, and intestine, covering the entire enterohepatic circulation, during the overall disease course of “health-hepatitis-cirrhosis-HCC” still requires research.
In this paper, based on an N,N-diethyl-1,4-butanediamine (DEABA) derivatization method for absolute quantification of BAs, systematic bile acid profiles in plasma, liver, and intestine across the whole progression of HCC were obtained. Combined with independent-sample t-tests, principal component analysis (PCA), orthogonal partial least squares discriminant analysis (OPLS-DA), and Bayesian linear discriminant analysis (BLDA), key BA biomarkers were screened out to distinguish different disease stages, which is valuable for the early diagnosis of HCC. Next, gene set enrichment analysis (GSEA) and The Cancer Genome Atlas (TCGA) database were employed to explore the effect of core genes on the distribution of bile acid pools, which is crucial for understanding the promotion of HCC development. Our study revealed the changes of BAs in the liver–gut axis during the inflammation-cancer transformation process and provides a novel perspective for treating HCC.
## 2.1. Histology Assessment and Total Bile Acid Features in the Inflammation-Cancer Transformation Process
Changes in total bile acid (TBA) levels can reflect the physiological status and injury degree of the organism. Studies have confirmed that the TBA profiles of patients with HCC have unique metabolic characteristics, and that TBA homeostasis depends on liver synthesis and intestinal absorption [15,16]. To elucidate the etiopathogenesis of HCC underlying TBA metabolism disorders, the present study evaluated the various canceration stages of DEN-induced rats based on the results of hematoxylin-eosin (H & E)-stained liver tissue sections and quantified the TBA levels in rat plasma, liver, and intestine at different stages of HCC progression.
H & E staining showed that hepatocytes began to exhibit severe impairment in the 8th week compared to healthy controls (Figure 1A), termed the hepatitis stage (Figure 1B). The liver tissue was infiltrated with lymphocyte-dominated inflammatory cells, with a small amount of bile duct hyperplasia and localized vascular stasis. The cirrhosis stage occurred in the 12th week (Figure 1C), with an obvious structural disorder of liver lobules, proliferation of perivenous connective tissue, formation of pseudolobules with hepatocyte regeneration nodules, and bile duct hyperplasia. The 16th week was the initial stage of HCC (Figure 1D). Microscopically, hepatocyte vacuolar degeneration and a small number of adenoid structures were observed, and a large amount of bile duct hyperplasia was visible. At the same time, massive vascular stasis and brownish-yellow pigmentation were observed. The 20th week was described as the advanced HCC stage (Figure 1E). The hepatic tissue showed obvious adenoid structures, all cells had enlarged deep-stained nuclei, and different degrees of vacuolar degeneration were observed.
Based on the histological results, we found that TBA levels significantly increased in all disease groups (Figure 2A). The TBA level of intestinal contents samples gradually decreased with disease progression, which showed an opposite trend to plasma and liver samples (Hepatitis & Cirrhosis vs. Control ** $p \leq 0.01$; HCC & Advanced HCC vs. Control * $p \leq 0.05$), while the TBA levels in plasma and liver gradually increased in all stages (* $p \leq 0.05$, ** $p \leq 0.01$). Therefore, we speculate that there is a close relationship between the inflammation-cancer transformation process and enterohepatic circulation.
To further analyze the specific reasons for the gradual decrease of TBA levels in the intestine, we subsequently analyzed total primary and secondary BAs in plasma, liver, and intestinal contents. We found that total primary and secondary BAs were markedly elevated in plasma and intestinal contents. However, in liver samples only, we observed the specific phenomenon of elevated total primary BAs but decreased secondary BAs (* $p \leq 0.05$, ** $p \leq 0.01$). With the development of HCC, total primary BAs in the intestine declined in the advanced HCC stage, in contrast to the continuous increase of total primary BAs in the plasma and liver (Figure 2B). In addition, it is noteworthy that the total secondary BA level in plasma and liver showed an abnormal rebound at the advanced HCC stage, which was not seen in intestinal contents (Figure 2C).
## 2.2. Observing Liver–Gut Axis BAs Environment and Screening HCC Biomarkers for Early Diagnosis
To identify the key BAs driving the evolution of HCC, we quantified the changes in the levels of 5 free BAs (cholic acid, CA; chenodeoxycholic acid, CDCA; ursodeoxycholic acid, UDCA; lithocholic acid, LCA; deoxycholic acid, DCA; Figure 3) and their 10 associated conjugated BAs in plasma, liver, and intestine (Figure 4). The quantitative results of the 15 BAs in plasma, liver, and intestinal contents samples from the different disease stages of HCC and healthy controls are included in Table S3, and the results are expressed as mean ± SD.
For free BAs, we found the same trend in all three sample types, with a significant increase in CA, CDCA, and DCA and a marked decline in LCA (* $p \leq 0.05$, ** $p \leq 0.01$). In addition, the divergent behavior of UDCA is noteworthy: it was reduced in the liver and intestinal contents but elevated in plasma.
In rodents, free BAs are more likely to be conjugated to taurine, with glycine-conjugated BAs accounting for a small proportion of conjugated BAs [17]. Reports support that glycine-conjugated BAs are present at low levels in rats [18]. Because of these low levels and some errors in the quantitative analysis, individual disease groups did not show significant differences compared to the control group. However, from an overall perspective, glycocholic acid (GCA), glycochenodeoxycholic acid (GCDCA), glycodeoxycholic acid (GDCA), glycolithocholic acid (GLCA), and glycoursodeoxycholic acid (GUDCA) all showed trends similar to their prototypes in the three sample types (Figure 4A–E). Taurocholic acid (TCA), taurochenodeoxycholic acid (TCDCA), taurodeoxycholic acid (TDCA), taurolithocholic acid (TLCA), and tauroursodeoxycholic acid (TUDCA) showed trends consistent with their prototype BAs only in plasma and liver. Surprisingly, all five taurine-conjugated BAs were reduced in the intestine, and we found a progressive decrease in TCA, TUDCA, and TDCA with disease progression (* $p \leq 0.05$, ** $p \leq 0.01$, Figure 4F–J).
Next, the association between discrepancies in bile acid levels and inflammatory-cancer transformation was established by two multivariate modeling approaches, PCA and OPLS-DA. The results showed that the samples were separated according to their respective disease stages. As HCC progresses, the PCA score plot demonstrated a definite trend, confirming the potential of BAs to predict disease staging (Figure 5A–C). Next, combining the contribution degree in OPLS-DA (VIP > 1) and the significance of the independent t-test ($p \leq 0.05$), CDCA, LCA, UDCA, and GLCA in plasma, and CDCA in liver and intestine, were identified as biomarkers with a positive role in the early diagnosis of HCC (Figure 5D–F). Liver biopsy is currently the gold standard for early diagnosis of HCC, but patient acceptance of this invasive technique is poor. A BLDA diagnostic model was therefore constructed from CDCA, LCA, UDCA, and GLCA in plasma to achieve non-invasive detection. The coefficients of the four biomarkers and the constants in the BLDA diagnostic model are listed in Table 1. By substituting the bile acid concentrations into the respective equations, the probability of being classified into the corresponding disease group was calculated. The result indicated a reliable model; $86.7\%$ of the samples could be correctly classified (Table S4).
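To illustrate how the Table 1 equations are applied, the sketch below turns linear discriminant scores into group-membership probabilities; the coefficient values here are placeholders, not the fitted ones:

```python
import numpy as np

GROUPS = ["control", "hepatitis", "cirrhosis", "HCC", "advanced HCC"]
# Placeholder coefficients (one row per group) for CDCA, LCA, UDCA, GLCA,
# plus one constant per group; the fitted values are given in Table 1.
COEF = np.ones((5, 4))
CONST = np.zeros(5)

def classify(plasma_bas):
    """Posterior probability of each disease group for one plasma sample,
    computed from Bayesian linear discriminant scores."""
    scores = COEF @ np.asarray(plasma_bas, dtype=float) + CONST
    exp_scores = np.exp(scores - scores.max())     # numerically stable
    return dict(zip(GROUPS, exp_scores / exp_scores.sum()))

# classify([cdca, lca, udca, glca]) -> probabilities summing to 1
```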
## 2.3. BAAT was Associated with Altered Composition of the Intestinal BA Pool and Disruption of Enterohepatic Circulation
To explore the potential mechanisms of bile acid metabolism changes in HCC patients and to screen out valuable key target genes, 373 HCC samples and 50 healthy samples from the TCGA database were included in the present analysis. GSEA was used to screen out 15 gene sets related to the biological functions of bile acids (Table S5, Figure S1). We obtained 125 genes from the 15 gene sets and imported them into the STRING database to visualize the Protein-Protein Interaction (PPI) network with a confidence level > 0.4 (Figure 6). Finally, Cytoscape software was applied to rank the key node genes with the cytoHubba plug-in and the maximal clique centrality (MCC) algorithm, and BAAT was identified as the top-ranked hub gene.
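cytoHubba's MCC score for a node v is the sum of (|C| - 1)! over all maximal cliques C that contain v. A small NetworkX sketch of this computation, run on a graph built from the STRING export (confidence > 0.4), is:

```python
import math
from collections import defaultdict
import networkx as nx

def mcc_scores(G: nx.Graph) -> dict:
    """Maximal clique centrality: MCC(v) = sum over maximal cliques C
    containing v of (|C| - 1)!, as computed by cytoHubba."""
    scores = defaultdict(int)
    for clique in nx.find_cliques(G):            # enumerates maximal cliques
        weight = math.factorial(len(clique) - 1)
        for v in clique:
            scores[v] += weight
    return dict(scores)

# G = nx.Graph(); G.add_edges_from(string_edges)  # STRING pairs, score > 0.4
# scores = mcc_scores(G); top = max(scores, key=scores.get)  # BAAT in this study
```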
BAAT catalyzes the final modification of free BAs into conjugated BAs before they enter the enterohepatic circulation [19]. Evidence indicates that, in rats, BAAT conjugates glycine with extremely low efficiency but conjugates taurine efficiently [20,21]. The TCGA database shows that BAAT is significantly under-expressed in HCC cases (** $p \leq 0.01$, Figure S2), suggesting that BAAT deficiency is partly responsible for the decrease in taurine-conjugated BAs in the intestine, which would alter the composition of the intestinal bile acid pool and increase its toxicity, thereby promoting the progression of inflammation to HCC.
## 3. Discussion
In recent years, HBV infection has progressively developed into a major cause of HCC. At the same time, 80–$90\%$ of new cases occur in the context of cirrhosis, suggesting that hepatitis and cirrhosis play important roles in the precancerous liver environment [22,23]. Research has confirmed that early diagnosis of HCC by monitoring BAs may improve prognosis and the feasibility of curative treatment [24]. Meanwhile, the bidirectional communication of the liver–gut axis is an essential part of coordinating the dynamic balance of the bile acid pool in the body [25]. However, few existing articles describe whole bile acid profiling in the enterohepatic circulation during the process of “hepatitis-cirrhosis-HCC”, and the pathogenesis of HCC has not been fully clarified to date. We defined the four disease stages of HCC development based on the previous literature [26,27] and histopathological analysis, achieving for the first time dual-coverage monitoring of the dynamic changes in bile acid levels and distribution across the enterohepatic circulation and the evolution of “hepatitis-cirrhosis-HCC”, and found that the imbalance of the enterohepatic circulation system was a key driver of the inflammation-cancer transformation process, which contributes to understanding the pathogenesis of HCC.
This study indicated a significant accumulation of BAs in the liver–gut axis, with TBA levels in plasma and liver positively correlated with HCC progression. A high bile acid environment is known to induce reactive oxygen species production and apoptosis in hepatocytes, further leading to impaired liver function [28]. It is accepted that the gradual accumulation of TBA is a major risk factor for the development of HCC, and it is well established that TBA levels and enterohepatic circulation profoundly influence each other [29]. Enterohepatic circulation is the process by which BAs pass from the liver to the intestine and then return to the liver through reabsorption from the portal vein [25,30]. This process undergoes extensive feedback and feed-forward regulation by specialized absorption and excretion transport systems in the liver and intestine [31]. Furthermore, defective expression and function of bile acid export, as well as reabsorption, have been recognized as important causes of progressive cholestasis in the liver and plasma [32,33]. The bile salt export pump (BSEP) and multidrug resistance-associated protein (MRP2) are key transport proteins for the hepatic efflux of BAs, while the sodium taurocholate co-transporting polypeptide (NTCP) and organic anion transporting polypeptide (OATP) are the main liver transport proteins responsible for the uptake of circulating BAs from the portal vein [34,35]. Reports on patients with HCC also indicate that BSEP, MRP2, NTCP, and OATP expression is downregulated [29,36], corroborating the disruption of enterohepatic circulation in the development of HCC.
The intestine is the site of secondary BA synthesis: primary BAs synthesized in the liver are further metabolized in the intestine [37]. We documented dysregulation of primary and secondary BAs in the liver–gut axis, revealing a unique metabolic regulation of BAs in the intestine. The organic solute transporters alpha/beta (OSTα/OSTβ) are exporters of BAs from the intestine and an important link in the enterohepatic circulation [38]. It has been confirmed in the literature [39] that the absence of OSTα/OSTβ expression causes an increased level of BAs in the intestinal contents as well as in the small intestine. Our quantitative results showed that total secondary BAs were most significantly elevated in the intestine, in addition to being equally elevated in plasma but reduced in the liver, a characteristic phenomenon that likewise suggests a deficiency of the liver bile acid transport system.
The mechanisms underlying the failure of the intestinal barrier and the development of a leaky gut are not fully understood. Still, abnormal retention of toxic BAs is recognized as an important contributing factor [40,41,42]. Secondary BAs are generated from primary BAs through reactions such as 7α-dehydroxylation, so they are the most hydrophobic of all BAs, a property thought to be linked to hepatotoxicity [43]. On the other hand, secondary BAs and their derivatives are a major component of the intestinal bile acid pool, and their elevation represents a change in the toxicity of the intestinal bile acid pool [44]. With the progressive development of HCC, we concluded that the large accumulation of secondary BAs in the intestinal epithelium alters intestinal permeability, eventually causing intestinal leakage. Therefore, we believe that the abnormal rebound of total secondary bile acids in plasma and liver is caused by the development of intestinal leakage and the massive efflux of toxic substances accumulated in the intestine at the advanced HCC stage. These processes also coincide with the progressive decrease of total and secondary BAs in the intestine of the disease group.
CA and CDCA are the two primary BAs, and DCA and LCA are the secondary BAs derived from their respective conversion. According to reports, the hydrophobic-hydrophilic balance of BAs is closely related to metabolic homeostasis in vivo [45], and more hydrophobic BAs can act as cancer promoters and further amplify the development of HCC [46,47]. The high hydrophobicity of CDCA and DCA makes them cytotoxic and pro-inflammatory [48,49]. CA is not highly hydrophobic, but studies have shown that feeding mice with CA increases the size and hydrophobicity of the bile acid pool while causing cholestasis and hepatic steatosis [50]. LCA also has hydrophobic properties, but it accounts for only a small fraction of the BA pool. UDCA is a primary bile acid in rats and a non-toxic hydrophilic bile acid [51]. Evidence supports the ability of UDCA to accelerate the enterohepatic circulation and its cytoprotective properties [52,53]. Therefore, the elevation of CA, CDCA, and DCA in the liver and intestine and the downregulation of LCA and UDCA imply a hydrophobic shift in the composition of BAs and a progressive accumulation of toxic BAs that inhibit the enterohepatic circulation. Bile flow is primarily dependent on the drive of conjugated BAs. Congenital defects in BA conjugation can lead to malabsorption of fat-soluble vitamins and, thus, severe liver disease [54,55]. BAAT is the key enzyme capable of mediating bile acid conjugation [19]. As mentioned earlier, it has been demonstrated that BAAT−/− mice are almost completely devoid of taurine-conjugated BAs in the liver, suggesting that BAAT is the primary taurine-conjugating enzyme in mice [56,57]. Our data showed that the TBA level in the intestine remained significantly elevated, while all the taurine-conjugated BAs were continuously reduced in the intestine of model rats. We speculate that the downregulation of BAAT expression is the key reason for this phenomenon. Consistent with this, the gene enrichment results confirm our earlier speculation about the variation of taurine-conjugated BA levels in the intestine.
## 4.1. Reagents
Acetonitrile, isopropanol, and methanol were purchased from Fisher Scientific (Fair Lawn, NJ, USA), while formic acid, dimethyl sulfoxide, and acetone were purchased from Yuwang Co. Ltd. (Yucheng, China). The distilled water used in the experiments was purchased from Wahaha Group Co., Ltd. (Hangzhou, China). DEN used in animal experiments was purchased from Sigma-Aldrich (St. Louis, MO, USA).
The commercial standards selected for this study, the bile acid used for quantitative analysis, their abbreviations, CAS numbers, and manufacturers are included in Table S1.
## 4.2. Animals
For this study, male Wistar rats weighing 100 ± 20 g were purchased from Changsheng Biotechnology, and the animal protocol was approved by its Animal Ethical Committee (IACUC No. CSE202106002). The rats were kept at a constant relative humidity of 65 ± $15\%$ and a temperature of 23 ± 2 °C with 12 h light–dark cycles and had full access to food and water. The rats were fed and acclimatized to their environment for one week prior to the experiment. Then, 64 rats were randomly divided into two groups, the HCC model group and the healthy control group. Rats in the model group ($$n = 32$$) were injected intraperitoneally with DEN solution at a dose of 70 mg/kg once a week for 10 weeks, while rats in the control group ($$n = 32$$) were injected intraperitoneally with an equal volume of saline as a control.
## 4.3. Histopathological Analysis
Liver tissue sections were deparaffinized with xylene and dehydrated in ethanol. The tissue was cut into 3 µm sections and then stained with H&E. Images were acquired using a NIKON Digital Sight DS-FI2 imaging system after observation with a NIKON Eclipse Ci optical microscope.
## 4.4. UFLC-MS/MS Conditions for Quantitation of BAs
A previously published method by our group was used to quantify the BAs [58]. The method is based on a polar-response homogeneous dispersion strategy with DEABA labeling, which reduces the polarity and response gap of the analytes and improves selectivity compared to non-derivatization. The ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) system, chromatographic column, and liquid phase conditions are described in the previous method [58]. The positive ion gradient elution program was: 0.01–10.00 min, 20% B→50% B; 10.00–17.00 min, 50% B→85% B; 17.00–22.00 min, 85% B→90% B. The negative ion gradient elution program was: 0.01–4.00 min, 20% B→35% B; 4.00–6.00 min, 35% B→70% B; 6.00–10.00 min, 70% B→85% B; 10.00–10.10 min, 85% B→90% B; 90% B was then held from 10.10 to 12.00 min.
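To make the two gradient programs easier to audit, the minimal sketch below transcribes them into small Python tables with a helper that interpolates %B at any time point. This is purely illustrative: the segment boundaries are taken from the text above, and the helper function is not part of the published method.

```python
# Illustrative only: the gradient programs above expressed as
# (start_min, end_min, %B_start, %B_end) segments.
POSITIVE_GRADIENT = [
    (0.01, 10.00, 20, 50),
    (10.00, 17.00, 50, 85),
    (17.00, 22.00, 85, 90),
]

NEGATIVE_GRADIENT = [
    (0.01, 4.00, 20, 35),
    (4.00, 6.00, 35, 70),
    (6.00, 10.00, 70, 85),
    (10.00, 10.10, 85, 90),
    (10.10, 12.00, 90, 90),  # hold at 90% B
]

def percent_b(t: float, gradient) -> float:
    """Linearly interpolate %B at time t (min) within a gradient program."""
    for t0, t1, b0, b1 in gradient:
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError(f"time {t} min is outside the gradient program")

print(percent_b(5.0, POSITIVE_GRADIENT))  # ~35.0, halfway through segment 1
```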
We used an electrospray ionization (ESI) source in both positive and negative ion modes, and BAs were analyzed and quantified in multiple reaction monitoring (MRM) mode. The ion spray voltage was 5500 V(+)/4500 V(−), and the other mass spectrometry parameters were as follows: curtain gas (N2), 20 psi; nebulizer gas (gas 1, N2), 50 psi; heater gas (gas 2, N2), 50 psi; and source temperature, 500 °C(+)/500 °C(−). The corresponding mass spectrometer (MS) parameters for the 15 BAs can be found in Table S2.
## 4.5. Sample Collection and Pretreatment
For plasma samples, whole blood was collected from each group after a 12 h fast, placed in heparinized sterile Eppendorf tubes, and centrifuged at 10,142× g for 10 min at 4 °C, after which the plasma was transferred. BAs were then extracted from the plasma samples as described in the previous method [58].
For liver samples, rats in each group were killed by cervical dislocation after plasma collection. The liver was immediately excised, rinsed in physiological saline, blotted with filter paper, and transferred to a dry-ice box. Liver tissue samples (50.00 ± 0.50 mg each) were homogenized in 100 μL physiological saline for 5 cycles (5 s at 300 W, with 3 s between cycles) using an ultrasonic cell disruptor (JY92-IIDN, SCIENTZ, Zhengjiang, China) in an ice bath. Each liver homogenate was spiked with 10 µL of internal standard (the same internal standard used for plasma samples) and 10 µL of methanol. After vortex shaking for 30 s, 500 µL of protein precipitation reagent, methanol:isopropanol (v/v, 1:2), was added. The homogenate was vortex-shaken for 5 min and centrifuged (4 °C, 10,142× g) for 10 min, and the upper layer was dried under a stream of nitrogen. The dried liver samples were derivatized in the same manner as the plasma samples and then subjected to subsequent analysis.
For intestinal contents samples, the rats were placed in metabolic cages on the day before sacrifice to collect the intestinal contents over 24 h. The collected samples were lyophilized for 48 h and ground into powder. Lyophilized powder (50.00 ± 0.50 mg) was mixed with 500 µL of physiological saline and vortexed for 10 min to obtain the intestinal contents homogenate. The pretreatment procedure for intestinal contents samples was approximately the same as for liver samples, with two differences: for protein precipitation, 600 µL of methanol:acetonitrile:acetone (v/v/v, 1:1:1) was added to the intestinal contents sample, and the supernatant was filtered through a 0.22 μm organic filter membrane before drying. The dried intestinal contents samples were derivatized in the same manner as the plasma samples and then subjected to subsequent analysis.
## 4.6. Gene Enrichment Analysis
We collected samples from the TCGA Genomic Data Commons data portal (https://portal.gdc.cancer.gov/ (accessed on 15 September 2022)) and obtained their RNA sequencing fragments per kilobase of transcript per million mapped reads (FPKM) data.
In this study, we selected the gene sets associated with the biological functions of bile acids (shown in Table S3) from the GSEA data set (https://www.gsea-msigdb.org/ (accessed on 5 September 2022)) and performed enrichment analysis between the two groups using GSEA software (version 4.2.3). Gene sets with a p-value < 0.05, false discovery rate (FDR) < 0.05, and normalized enrichment score (NES) > 1.5 were retained for subsequent processing. We visualized the PPI network using STRING 11.5 (https://cn.string-db.org/ (accessed on 18 November 2022)) and screened key genes with the cytoHubba plug-in of Cytoscape (version 3.9.1).
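As a minimal sketch of the gene-set selection step, the filtering criteria above (p < 0.05, FDR < 0.05, NES > 1.5) can be applied to a GSEA summary table with pandas. The rows below are made up for illustration; the column names mirror the standard GSEA report fields.

```python
import pandas as pd

# Hypothetical GSEA summary rows; real exports use columns such as
# "NAME", "NES", "NOM p-val", and "FDR q-val".
report = pd.DataFrame({
    "NAME":      ["BILE_ACID_METABOLISM", "TAURINE_CONJUGATION", "UNRELATED_SET"],
    "NES":       [1.82, 1.61, 0.94],
    "NOM p-val": [0.002, 0.014, 0.41],
    "FDR q-val": [0.010, 0.032, 0.58],
})

# Apply the selection criteria stated above: p < 0.05, FDR < 0.05, NES > 1.5.
selected = report[
    (report["NOM p-val"] < 0.05)
    & (report["FDR q-val"] < 0.05)
    & (report["NES"] > 1.5)
].sort_values("NES", ascending=False)

print(selected)  # the two bile-acid-related sets pass; the unrelated set does not
```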
## 4.7. Statistical Analysis
The generated raw data files were processed using the Analyst® application (version 1.5.1, AB SCIEX™, Foster City, CA, USA), from which standard curves were created and all BAs were quantified. Significant differences between the experimental groups were determined using SPSS Statistics (version 26.0, IBM Corp., Chicago, IL, USA) and GraphPad Prism (version 9.2.0, GraphPad Software Inc., San Diego, CA, USA). The BLDA discriminant analysis was carried out with SPSS, while PCA and OPLS-DA used the SIMCA-P program (version 14.1, Umetrics, Malmö, Sweden). A p-value < 0.05 was considered statistically significant.
## 5. Conclusions
In this study, we achieved dual-coverage monitoring of the bile acid profile in the liver–gut axis throughout the whole inflammation-cancer transformation process. After an in-depth analysis of the differences in levels of TBA, primary/secondary BAs, and individual BAs, we found that the enterohepatic circulation is disrupted during HCC development. Next, we used GSEA gene enrichment analysis to identify the key node gene BAAT, which dominates the synthesis of taurine-conjugated BAs in rats, and we validated the specific pattern of taurine-conjugated BAs observed in the intestine.
In summary, our results suggest that the disruption of the enterohepatic circulation in the internal environment is an important factor dominating the inflammation-cancer transformation process, and the lack of BAAT may be one of the potential mechanisms interrupting the enterohepatic circulation. Additionally, we developed the BLDA diagnostic model and found that GLCA, CDCA, UDCA, and LCA in plasma samples can be used as biomarkers to distinguish the different disease stages of HCC, enabling early diagnosis of HCC from the perspective of non-invasive detection. Immunotherapy has also become a major research topic for treating HCC. It has recently been suggested that regulatory T cells, the most abundant immunosuppressive cell population of the HCC-related tumor microenvironment, may represent a potential target for HCC immunotherapy [59]. Evidence supports that intestinal flora influences the differentiation, accumulation, and function of regulatory T cells [60], and the influence of intestinal flora on BA metabolism is well established [61,62]. In future studies, it will be of great interest and necessity to focus on the link between BA metabolism, intestinal flora, and the immune cell populations of the tumor microenvironment, which will contribute to the further development of HCC therapy.
# A Cluster-Randomised Stepped-Wedge Impact Evaluation of a Pragmatic Implementation Process for Improving the Cultural Responsiveness of Non-Aboriginal Alcohol and Other Drug Treatment Services: A Pilot Study
## Abstract
There is limited evidence regarding implementing organisational improvements in the cultural responsiveness of non-Aboriginal services. Using a pragmatic implementation process to promote organisational change around cultural responsiveness, we aimed to (i) identify its impact on the cultural responsiveness of participating services; (ii) identify areas with the most improvement; and (iii) present a program logic to guide cultural responsiveness. A best-evidence guideline for culturally responsive service delivery in non-Aboriginal Alcohol and other Drug (AoD) treatment services was co-designed. Services were grouped geographically and randomised to start dates using a stepped wedge design, then baseline audits were completed (operationalization of the guideline). After receiving feedback, the services attended guideline implementation workshops and selected three key action areas; they then completed follow-up audits. A two-sample Wilcoxon rank-sum (Mann–Whitney) test was used to analyse differences between baseline and follow-up audits on three key action areas and all other action areas. Improvements occurred across guideline themes, with significant increases between median baseline and follow-up audit scores on three key action areas (median increase = 2.0; Interquartile Range (IQR) = 1.0–3.0) and all other action areas (median increase = 7.5; IQR = 5.0–11.0). All services completing the implementation process had increased audit scores, reflecting improved cultural responsiveness. The implementation process appeared to be feasible for improving culturally responsive practice in AoD services and may be applicable elsewhere.
## 1. Introduction
There are substantial inequalities in health status and health care access between Aboriginal and Torres Strait Islander people (hereafter referred to as Aboriginal) and non-Aboriginal people in Australia [1], including disproportionate drug and alcohol-related morbidity and mortality [2]. Although all health services should be culturally safe, effective and welcoming to Australians from all cultural backgrounds, there is evidence that Aboriginal people receive less benefit from non-Aboriginal health services than non-Aboriginal people [3]. Ensuring that mainstream health services (that is, services that are not specifically developed for Aboriginal people) are responsive to Aboriginal peoples’ needs is a key strategy to reduce inequalities in healthcare access and enhance the quality of care provided to Aboriginal people [4,5,6]. Cultural responsiveness is an ongoing process of adapting systems, services and practice to fit with culturally diverse user preferences [7], and providing high-quality care that is culturally appropriate and safe [8]. While the importance of culturally responsive health services is well acknowledged [8], there is a lack of consensus on effective methods to develop health services that are culturally responsive [6].
Cultural responsiveness initiatives have been shown to improve healthcare worker cultural knowledge, awareness and sensitivity [9,10,11,12], improve patient satisfaction with providers [9,12,13] and increase access and frequency of visits by Aboriginal people [14]. However, the quality of many existing studies is low, frequently using observational study designs and interventions that provide one-off staff training, but which tend to be ineffective if not implemented as part of a systematic approach [6,15].
While many non-Aboriginal clinicians are individually committed to practising in a culturally responsive way, improving cultural responsiveness needs to be a whole-of-service activity that involves multiple strategies across all levels of the workforce and organisational policy, management and practices to be effective [5,6,16,17,18]. There is limited evidence, particularly in the Australian context, regarding effective systematic methods for implementing organisational-level change to improve cultural responsiveness [6]. Aiming to provide a structured method to implement best-evidence cultural responsiveness practices, the current project developed a pragmatic implementation process for facilitating organisational change in services. The first step of this process involved combining a number of recommended cultural responsiveness strategies [19] into a best-evidence guideline for improving the cultural responsiveness of non-Aboriginal AoD services [20]. The co-designed best-evidence guideline details a wide range of evidence-based strategies including: engaging management [21,22]; enhancing communication and relationships between mainstream and Aboriginal services and communities [3,23,24]; improving staff knowledge of the social and historical determinants of health [25]; and tailoring programs to suit the local community [26].
The core components, or themes, of the guideline were then operationalised into flexible activities that could be tailored to suit each service [27,28,29], and implemented in non-government organisation (NGO) non-Aboriginal AoD services. The implementation fidelity, barriers and facilitators to implementation, and their acceptability and feasibility, are described elsewhere [27,30]. The current study aims to identify the impact of the implementation process on the cultural responsiveness of participating services, as measured by the mean change in audit scores from baseline to three-month follow-up. Secondary aims were to identify the areas of the guideline that were most frequently selected as priority areas for change and most successfully actioned by services during the project. We also aimed to build on the services’ insights to develop a program logic to identify how the standardised core components were flexibly applied by services to support future implementation.
## 2.1. Study Design
The project was co-designed and implemented using a community-based participatory research approach [29,31] that facilitated iterative development of the best-evidence guideline and the pragmatic implementation process through collaboration between the project team (RW and JA, who have experience working in NSW AoD services), the researchers (SF, AH, AS), the Network of Alcohol and other Drugs Agencies (NADA; the peak organisation for the NGO AoD sector in NSW), the Primary Health Networks (PHNs) as the project funders and an Aboriginal Advisory Group (which included Aboriginal community members with professional and community connections to NGO AoD treatment services or government treatment services). The project was overseen by the Aboriginal Advisory Group to ensure the priorities and world views of Aboriginal experts were centralised into the guideline and the project implementation. Members of the Group were offered reimbursement for expenses arising from their involvement. Project implementation expenses were covered by the project. The impact of the project on the cultural responsiveness of participating services was evaluated using a cluster-randomised stepped-wedge design with 12 services and six clusters.
## 2.2. Participating Services
Seventeen non-Aboriginal NGO AoD treatment services from six PHN districts in New South Wales (NSW) were identified by the PHNs as being potentially willing to participate, with fifteen providing formal consent to participate (88%) (hereafter referred to as participating services). Participating services included a variety of AoD service types, including residential rehabilitation ($$n = 3$$), day programmes ($$n = 2$$), centre-based counselling and support ($$n = 3$$), outreach counselling and support ($$n = 4$$), groupwork and phone support ($$n = 1$$) and group or individual youth services ($$n = 2$$). Twelve services completed all project activities (80%; Table 1). No data related to Aboriginal clients or organisations were accessed or used in this phase of the project.
## 2.3. Cultural Responsiveness Project
The project was delivered in these sequential phases: (i) engage stakeholders, develop co-design structures and secure approvals from ethics and participating sites; (ii) co-design the implementation process and best-evidence guideline; (iii) implement the guideline and monitor uptake. Phase 1 and the process evaluation outcomes are described in detail elsewhere [27] and the co-design, implementation and monitoring steps are described below (phases 2 and 3). Aboriginal author RW was involved in all aspects of the project and was provided training in research methods, manuscript development and presenting research findings. Non-Aboriginal members of the research team (JA, AH, SF, AS) have extensive experience of working with Aboriginal communities over multiple projects and have completed training in cultural responsiveness. RW provided cultural mentoring to non-Aboriginal researchers. Findings from the project were presented to participating services and local Aboriginal peak bodies via ongoing discussions about the project and at formal events, such as the Aboriginal Corporation Drug and Alcohol Network of NSW (ACDAN) Symposium.
## 2.4. Co-Designed Best-Evidence Guideline for Cultural Responsiveness in Non-Aboriginal AoD Services (Phase 2)
A best-evidence guideline that describes key elements of culturally responsive service delivery in non-Aboriginal AoD treatment services was co-designed at the beginning of the project and this process is described fully in the guideline document [20] (See Supplementary File S1 or the version published online at https://www.nada.org.au/resources/alcohol-and-other-drugs-treatment-guidelines-for-working-with-aboriginal-and-torres-strait-islander-people-in-a-non-aboriginal-setting/ (accessed on 1 June 2022)). Briefly, the guideline co-design process was facilitated by an Aboriginal project team member (RW) and overseen by the Aboriginal Project Advisory Group [27]. The guideline identifies six themes: (1) Creating a welcoming environment, (2) Service delivery, (3) Engagement with Aboriginal organisations and workers, (4) Voice of the community, (5) Capable staff, and (6) Organisation’s responsibilities.
## 2.5.1. Clustering of Participating Services and Randomisation to a Starting Date
Services were clustered based on PHN district/geographical region ($$n = 6$$). Each cluster of services was randomised to an implementation starting date between June and October 2019, with approximately one month between clusters, as shown in Table 2. Cluster randomisation was conducted by a statistician independent of the project using random number generation. Owing to varying numbers of services within regions, and attrition, clusters included different numbers of services; cluster 1 included one service, cluster 5 included three services and the remaining clusters included two services each.
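A minimal sketch of this allocation step is shown below, assuming six labelled clusters and approximately monthly start dates; the actual randomisation was performed by an independent statistician, and the seed, cluster labels, and dates here are illustrative only.

```python
import random

# Hypothetical cluster labels and start dates (June-October 2019,
# roughly one month apart; two clusters share a month here purely
# for illustration).
clusters = ["cluster_1", "cluster_2", "cluster_3",
            "cluster_4", "cluster_5", "cluster_6"]
start_dates = ["2019-06", "2019-07", "2019-08",
               "2019-09", "2019-09", "2019-10"]

rng = random.Random(42)  # fixed seed so this toy allocation is reproducible
rng.shuffle(clusters)    # put the clusters in a random order

# Pair the randomly ordered clusters with the sequential start dates.
allocation = dict(zip(clusters, start_dates))
for cluster, start in sorted(allocation.items()):
    print(cluster, "->", start)
```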
The following implementation and monitoring steps were completed with each participating service.
## 2.5.2. Baseline Audits of Participating Services
Services were advised of their allocated start date and structured baseline audits of current culturally responsive practice, using a standardised audit tool, were completed individually with each participating service. The audit process identified the extent to which services addressed the guideline, rating cultural responsiveness according to 21 action areas which corresponded with the six guideline themes. Audit tools were developed which framed the 21 action areas as questions in order to collect information from staff at participating services. Audits were conducted by two trained auditors (RW, JA or another trained auditor) in the setting where the service was delivered and took between 90 min and two hours to complete. Auditors were independent of the service being audited and at least one auditor at each audit was Aboriginal.
## 2.5.3. Audit Feedback to Participating Services
Individualised written feedback from the audit findings was provided to each participating service, listing all guideline action areas with a descriptive assessment for each area reflecting the level of evidence observed during the audit (limited, some, good or excellent) and recommendations for areas where potential improvements could be made.
## 2.5.4. Guideline Implementation Workshops with Participating Services
Implementation workshops were held with key staff from services (CEOs/managers and direct service delivery staff) to explain the guideline, review the audit feedback, set goals for improvement and develop a detailed action plan tailored to their service (to operationalise action areas from the guideline themes). Workshops were facilitated by JA and RW. Staff identified and prioritised specific activities that they would implement from the 21 action areas and were encouraged to select three key action areas for their service to progress over the next three months. For example, activities that operationalise guideline Theme 1: Creating a welcoming environment, might include processes to ensure that all clients are welcomed respectfully at first contact with the service, providing tea/coffee/water in the waiting room, accommodating children or other family members in the service, or displaying local Aboriginal artwork. These self-designed activities provide flexibility in how individual services operationalised and implemented the core components, enabling the practice change activities to be tailored to the needs and resources of individual services and the communities they serve [1,2,3,4,5].
## 2.5.5. Follow-Up Audits of Participating Services
Follow-up audits of services were conducted after three months to assess change in culturally responsive practices in the 21 action areas, following the same procedure as for the baseline audits. Where possible, the same service staff attended the follow-up audit. Services were provided with a second individualised feedback report, including discussion of any changes that had occurred.
## 2.6. Measures
To privilege Aboriginal values and views throughout analysis and reporting, we used the guideline themes that were developed by the Aboriginal Advisory Group to assess culturally responsive practices. The study aimed to identify the impact of the project on the cultural responsiveness of services using the following outcomes: (i) change in audit score from baseline to follow-up audit on the three key action areas identified by staff at the implementation workshops (possible score 0–9); and (ii) change in audit score from baseline to follow-up audit in all other action areas from the guideline, other than the three key action areas selected by each service (possible score 0–54).
## 2.7. Statistical Analysis
The audit responses provided by staff were recorded into the audit tool. After each audit was completed, ratings of 0–3 were allocated to each of the 21 audit criteria, according to pre-specified rating rules, by one of the researchers conducting the audit (RW). A second researcher (SF) then independently reviewed the audit tool and rated the 21 criteria. The two sets of ratings were compared and any disagreements around ratings were resolved by discussion until a consensus was reached. A two-sample Wilcoxon rank-sum (Mann–Whitney) test was used to analyse the difference in audit scores between baseline and follow-up audits, on the three key action areas (outcome 1) and all other action areas (outcome 2). All analyses were conducted using Stata 16 [32].
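For illustration, the sketch below runs the same two-sample Wilcoxon rank-sum (Mann–Whitney) test in Python with SciPy on made-up audit scores; the study itself used Stata 16, and the scores shown here are not the study data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical audit scores on the 0-9 scale used for the three key
# action areas (one value per service); not the study's actual data.
baseline  = [3, 4, 2, 5, 4, 3, 6, 2, 4, 3, 5, 4]
follow_up = [5, 6, 4, 7, 6, 5, 8, 4, 6, 5, 7, 6]

# Two-sample Wilcoxon rank-sum (Mann-Whitney) test, as used in the paper.
stat, p = mannwhitneyu(baseline, follow_up, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```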
The extent of change across the six guideline themes was identified by summing item ratings within each theme and calculating the rates of change for each theme. The frequency with which each individual action area was selected by service staff (during workshops to operationalise their improvement goals), and whether improvements were subsequently observed in those action areas, were descriptively explored.
## 2.8. Development of a Program Logic
We used a program logic structure developed in previous work [5,7] to build a standardised logic model specifically for improving cultural responsiveness in non-Aboriginal NGO AoD services. The program logic model was developed by reviewing the audit findings and activities chosen by staff during the workshop and linking these to the core components (themes) of the guideline.
## 3.1. Implementation Process
Twelve of the fifteen participating services completed all service-specific project components. Some delays in completing the three-month follow-up audits occurred, with an average time between audits of 18 weeks (range 14–28 weeks) (See Table 2). The longest delays in completing the follow-up audits were for services “J”, “D” and “F”, with the audits completed at 19, 24 and 28 weeks, respectively. Service “B” was part of cluster 5; however, due to delays in completing the baseline audit, service “B” ultimately completed the project components in line with cluster 6. Further details on implementation and process outcomes are reported elsewhere [27].
## 3.2. Change in Cultural Responsiveness of Services in Three Key Action Areas
Outcomes are reported for services that completed baseline and follow-up audits ($$n = 12$$).
Ten of 12 services increased their audit score on their three key action areas at follow-up. The median follow-up scores were statistically significantly higher than the median baseline scores (median change = 2.0, IQR = 1.0–3.0, z = −2.79, p ≤ 0.005) (Table 3).
## 3.3. Change in Cultural Responsiveness of Services in All Other Action Areas
All 12 services showed an increase in score on all other action areas (excluding the three key action areas). The median follow-up scores were statistically significantly higher than the median baseline scores (median change = 7.5, IQR = 5.0–11.0, z = −1.97, p ≤ 0.05) (Table 3).
## 3.4. Guideline Themes with the Most Improvement
Overall, there were improvements in scores across all six themes of the guideline and all showed similar rates of improvement; Theme 5: Capable staff (+22%), Theme 3: Voice of the community (+18%), Theme 6: Organisation’s responsibilities (+18%), Theme 1: Creating a welcoming environment (+17%), Theme 2: Service delivery (+16%) and Theme 4: Engagement with Aboriginal organisations and workers (+16%).
## 3.5. Action Areas Most Frequently Selected (by Staff) and Most Frequently Improved
Service staff chose a wide variety of action areas from the guideline to prioritise; 16 of the 21 areas were selected at least once. Those most frequently selected as key action areas were: 1B: The physical environment is welcoming to Aboriginal people ($$n = 6$$ services); 3Ai: Aboriginal community engagement to develop relationships ($$n = 4$$); 3Aiii: Local history and protocols are reflected in practice and/or policy ($$n = 5$$); and 4A: Developing connections with Aboriginal organisations and workers ($$n = 5$$) (see the guidelines in Supplementary File S1 for further description). The action areas that services most frequently improved on were in Theme 2 (2B: Immediate triage options are available for Aboriginal people ($$n = 8$$ services) and 2C: Staff are culturally responsive in therapeutic practice ($$n = 7$$)), Theme 3 (3B: Local Aboriginal protocols are reflected in practice and/or policy ($$n = 7$$)) and Theme 6 (6Aii: There are Aboriginal-identified positions and Aboriginal publications and networks are used to advertise jobs ($$n = 7$$), 6Aiii: Service induction includes materials about working with Aboriginal people and materials are developed/reviewed by a local Aboriginal person ($$n = 8$$)).
## 3.6. Development of a Program Logic
To facilitate future implementation of improvements to cultural responsiveness, the researchers (AH, AS, SF) developed a program logic that is directly tied to the guidelines, shown in Table 4. This program logic was developed post-implementation to clearly delineate the standardised core components (guideline themes), the flexible components (service-level activities), and the likely mechanisms of change for future iterations of this project, based on previous work by the authors [28,29]. The second column lists the six best-evidence themes/principles that comprise the core components of the guideline. These are standardised across all services, as are the aims/goals/target areas for improvement (first column) and the articulation of why these core components would impact on cultural responsiveness (third column). The fourth column provides examples of specific activities that services can implement, with flexibility to choose practice change activities tailored to the needs and resources of individual services [28,31,33,34,35]. The remaining columns identify the measures of processes (the extent to which services engaged in the intervention process), outcomes (the extent to which indicators of culturally responsive practice improved) and data sources.
## 4. Discussion
The current study used a community-based, participatory research approach to develop a best-evidence, service-level practice change process, supported by a program-logic framework previously developed by the authors [28,29]. The pragmatic implementation approach is supported by existing evidence [33,34,35] and means that individual services could implement areas of the guideline that were most relevant to their local context and current level of cultural responsiveness. All participating services increased their overall audit scores and most increased scores on their chosen priority action areas, reflecting an increase in compliance with the guidelines and improved cultural responsiveness. The results are consistent with previous research demonstrating that audits and practice improvement interventions can be effective methods of identifying where improvements are needed, engaging with workers, and improving culturally responsive practices in a variety of health settings [14,36,37,38]. Our results support the effectiveness of the guideline and implementation process as a meaningful way of identifying and operationalising best-evidence principles of cultural responsiveness and enabling staff to understand and enact components of the guidelines that were relevant to their service. The program-logic model links the best-evidence core components of the guideline to the flexible service level activities, likely mechanisms of change, processes and outcomes, and can be used to guide future work on improving cultural responsiveness in AoD and potentially other health and human services. Our approach demonstrates a process to improve the cultural responsiveness of service delivery, and it is hoped that this approach impacts on inequalities in health status and healthcare access between Aboriginal and non-Aboriginal people.
An important strength of the project is that the co-design and implementation were led by an Aboriginal researcher and AoD worker, with extensive consultation with senior Aboriginal AoD clinicians (via the Aboriginal Advisory Group), funders, researchers, as well as links with workers (via the peak organisation for the NGO AoD sector) [6,21,24]. Furthermore, the guidelines recommend multiple evidence-based strategies across all organisational levels [6,16], such as: tailoring of service delivery to local communities [26]; enhancing relationships with Aboriginal services and communities [3,23,24]; improving staff knowledge and competency [25]; and implementing organisation-wide policies and practices [39]. The improvements in audit scores observed across all themes of the guideline indicate that these concepts and strategies were clearly operationalised. Improvements were frequently observed in areas related to enhancing relationships with Aboriginal communities (e.g., having local Aboriginal protocols reflected in practice and/or policy), improving staff knowledge and skills (e.g., improved crisis triage options and staff demonstrating cultural responsiveness in direct service delivery) and organisation-wide policies or practices (e.g., including materials about working with Aboriginal people in service induction training). A larger evaluation with a longer timeframe would allow a more detailed exploration of specific components of the audits and guidelines and whether there are critical activities that services can enact to improve cultural responsiveness.
In addition to measuring change in audit scores (reflecting change in cultural responsiveness), future studies should also aim to examine the impact of these changes on service delivery or utilisation outcomes, potentially through using routinely collected administrative data. Previous reviews of cultural responsiveness programs have highlighted the need for valid indicators of change and objective outcome measures [6,12], and routinely collected administrative data represent objective, pragmatic, low-cost and easily tracked outcomes. As services improve their levels of cultural responsiveness, we would hope to also see improvements in service utilisation by Aboriginal people (for example, the number of episodes of care provided to and completed by Aboriginal people). The short time frame of the current evaluation limited our ability to examine these types of outcomes. Not only was the time for services to enact changes limited [27], but three months was likely not sufficient for any changes implemented to impact on service utilisation or client outcomes.
The project used a methodologically strong assessment process involving a standardised audit tool that reflected the best-evidence guidelines and a double-scoring system to enhance inter-rater reliability [40]. The possibility of practice effects should be noted; service staff may have had a more thorough knowledge of the audit criteria after completing the baseline audit, leading to more positive reporting of activities in follow-up audits. The practical implication of this is that some of the improvement in follow-up audit scores may be due to improvements in staff understanding of the audit, rather than the specific cultural competence activities they enacted. This is an issue about the true mechanisms of change: it is likely that the observed changes in cultural responsiveness are a combination of both the activities themselves and greater familiarisation with the audit process and content. Some services had limited capacity for improvement in audit scores for their three priority action areas; three services chose to prioritise an area that already had a full score at baseline, and one service only selected two priority areas. For future implementations, services should be encouraged to choose priority areas that have room for improved practice, providing maximum opportunity for improvements.
Participating services were self-selected, and it is possible that they may have had a pre-existing active interest in and/or resources to dedicate to improving their cultural responsiveness. The significant improvements in audit scores achieved by these services may not occur so quickly in other services. However, the participating services do represent a broad geographic and demographic area of NSW (including both urban and regional locations), as well as a variety of service delivery types. Service frontline and managerial staff rated the project as highly acceptable [27]. A key next step is a longer-term follow-up of participating services to establish whether the improvements in culturally responsive practice can be maintained or extended over time. Importantly, in line with the logic model presented, this will include an examination of administrative data to assess any changes in service utilisation. Then, if indicated, a randomised controlled trial evaluation of the implementation process in a larger sample of services may be warranted to demonstrate the generalisability, and costs and benefits of the process.
## 5. Conclusions
The co-designed best-evidence guideline and pragmatic implementation process represents a feasible and acceptable method [27] for implementing service-wide improvements in cultural responsiveness and may be applicable to improving the cultural responsiveness of a wide variety of health and human services. The randomised stepped-wedge evaluation design, double-rated audit scoring, and standardised core intervention increased methodological rigour, while the flexibility with which individual services can operationalise and implement the guidelines allowed tailoring to available resources and needs. |
# The Novel RXR Agonist MSU-42011 Differentially Regulates Gene Expression in Mammary Tumors of MMTV-Neu Mice
## Abstract
Retinoid X receptor (RXR) agonists, which activate the RXR nuclear receptor, are effective in multiple preclinical cancer models for both treatment and prevention. While RXR is the direct target of these compounds, the downstream changes in gene expression differ between compounds. RNA sequencing was used to elucidate the effects of the novel RXRα agonist MSU-42011 on the transcriptome in mammary tumors of HER2+ mouse mammary tumor virus (MMTV)-Neu mice. For comparison, mammary tumors treated with the FDA approved RXR agonist bexarotene were also analyzed. Each treatment differentially regulated cancer-relevant gene categories, including focal adhesion, extracellular matrix, and immune pathways. The most prominent genes altered by RXR agonists positively correlate with survival in breast cancer patients. While MSU-42011 and bexarotene act on many common pathways, these experiments highlight the differences in gene expression between these two RXR agonists. MSU-42011 targets immune regulatory and biosynthetic pathways, while bexarotene acts on several proteoglycan and matrix metalloproteinase pathways. Exploration of these differential effects on gene transcription may lead to an increased understanding of the complex biology behind RXR agonists and how the activities of this diverse class of compounds can be utilized to treat cancer.
## 1. Introduction
Retinoid X receptor (RXR) agonists bind to and activate the nuclear receptor RXR. RXR is a type II nuclear receptor, which is found in the nucleus bound to DNA and corepressor proteins [1,2]. Upon activation by a ligand, conformational changes in the structure of RXR promote dissociation of corepressor proteins and recruitment of diverse coactivator proteins. Because of its flexible dimerization domain, RXR homodimerizes or heterodimerizes with other nuclear receptors, including peroxisome proliferator-activated receptor (PPAR), liver X receptor (LXR), pregnane X receptor (PXR), or vitamin D receptor (VDR), to initiate transcription [3]. Upon activation, RXR regulates the transcription of target genes, involved in proliferation, differentiation, survival, and immune cell function [4].
Bexarotene is an RXR agonist, currently FDA approved to treat cutaneous T cell lymphoma (CTCL) [5]. Bexarotene has been tested in clinical trials for breast and non-small cell lung cancer but failed to attain approval for these indications, despite promising responses in some patients and manageable side effects [6,7]. Many have sought to improve the efficacy of bexarotene via novel drug delivery systems and formulations [8] or have made structural modifications to identify new RXR agonists [9,10]. Our new analog, MSU-42011, is effective for treatment in the MMTV-Neu model of HER2+ breast cancer [11], an established mouse model which recapitulates the human disease, as has been validated by gene expression profiling [12,13]. This model expresses wild-type, unactivated Neu in mammary tissue under the mouse mammary tumor virus (MMTV) promoter [14]. MSU-42011 also effectively reduces established tumor burden in the A/J mouse model of carcinogen-induced lung cancer [9]. In both of these preclinical models, changes in immune cell populations differed in the tumors of mice treated with MSU-42011 vs. bexarotene [9], suggesting that these compounds have distinct patterns of immunomodulatory activity.
Nuclear receptor biology is complex, and gene transcription varies based on the nuclear receptor binding partner of RXR [15]. For example, target pathways under the control of RXR:RAR heterodimers include genes encoding the enzymes phosphoenolpyruvate carboxykinase (PEPCK) and tissue transglutaminase 2 (TG2), immune-related genes such as B cell translocation gene 2 (Btg2), and retinoic acid response genes such as cellular retinol binding protein 1 (Crbp1) and cellular retinoic acid-binding protein 1 (Crabp1) [16]. Several genes involved in lipogenesis (Agpat2, Acsl1, Gpat3) and glucose metabolism (Hk2, Taldo1) are regulated by RXR:PPAR dimerization in adipocytes [17]. VDR, another nuclear receptor for which RXR is an obligate heterodimerization partner, regulates the expression of an extensive list of genes through VDR response elements. In quiescent hepatic stellate cells, binding of calcipotriol to VDR initiates binding to a cistrome of 6281 target sites, which expands to 24,984 sites when these cells are activated by lipopolysaccharide (LPS) or transforming growth factor beta (TGFβ) [18]. Through dimerization with the PXR nuclear receptor, RXR regulates transcription of genes involved in xenobiotic and endobiotic metabolism, cytoprotective mechanisms, and detoxification, including enzymes such as CYP3A4 and efflux pumps such as MDR1 [19,20]. Because the network of nuclear receptor target genes is vast, the biological effects of RXR activation are numerous and diverse.
Others have previously investigated the effects of bexarotene on the transcriptional regulatory network in mammary glands of mouse models of breast cancer [21], but to date no one has analyzed gene expression data from tumors treated with different RXR agonists. To this end, we used RNA sequencing to compare pathways activated by treatment with MSU-42011 versus pathways activated by bexarotene and validated selected genes by qPCR and immunohistochemistry. These data provide additional information about the cancer-relevant transcriptional regulation of RXR agonists and the diversity of activities of these compounds.
## 2.1. RXR Agonists Regulate Pathways Relevant in Breast Cancer
High-throughput techniques such as RNA sequencing (RNA-seq) characterize differential expression across the whole transcriptome, allowing differentially expressed genes to be parsed into biological pathways for a comprehensive analysis of the RXR agonist response in tumors. For these studies, MMTV-Neu mice (four per group) were fed control diet, MSU-42011 (100 mg/kg diet), or bexarotene (100 mg/kg diet) for 10 days. Tumors were harvested and RNA was analyzed by RNA-seq (Figure 1A). Relative to control tumors, tumors treated with both RXR agonists had higher expression of canonical immune pathways such as binding of antigen presenting cells and proliferation of immune cells, mononuclear leukocytes, and lymphocytes (Figure 1B). Causal network analysis [22], a means of identifying upstream regulators of differentially expressed genes from RNA-seq, identified SMAD4, IRF3, IRF7, and ZBTB10 as possible regulatory nodes.
## 2.2. Top Genes Differentially Expressed in Tumors Treated with MSU-42011 and Bexarotene Correlate with Patient Survival
Differential expression analysis revealed a list of genes (GSE211290) differentially expressed in control tumors vs. tumors from mice treated with MSU-42011 vs. tumors from mice treated with bexarotene. This list of 289 significantly (padj < 0.05) upregulated or significantly downregulated genes was sorted by adjusted p value. Of the top 10 most significant differentially expressed genes, high levels of expression of five genes correlate with improved overall survival in breast cancer patients: GRIA3 (logrank p = 3.1 × 10⁻⁷) (Figure 2A), CLEC10 (logrank p = 0.0035) (Figure 2B), FNDC1 (logrank p = 9.7 × 10⁻⁵) (Figure 2C), ISLR2 (logrank p = 4.8 × 10⁻⁵) (Figure 2D), and ITGA11 (logrank p = 2.4 × 10⁻⁶) [23] (Figure 2E). Survival curves were generated using the Kaplan–Meier Plotter (KmPlot) [24], without further stratification of breast cancer patients. These genes code for a glutamate receptor linked to migration and invasion (GRIA3) [25]; a c-type lectin with a role in cellular adhesion, signaling, and inflammation which serves as a dendritic cell marker (CLEC10) [26]; a fibronectin protein associated with invasion and chemoresistance (FNDC1) [27]; a member of the immunoglobulin superfamily which participates in nervous system development (ISLR2) [28]; and an alpha integrin which regulates adhesion to the extracellular matrix and the organization of collagen (ITGA11) [29].
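The KmPlot comparison can be approximated offline with the lifelines package, as in the minimal sketch below; the survival table is entirely hypothetical and stands in for the KmPlot patient cohorts split by gene expression.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical survival data: 'time' in months, 'event' 1 = death,
# 'high_expr' splits patients by expression of the gene of interest.
df = pd.DataFrame({
    "time":      [12, 34, 60, 80, 15, 45, 90, 120, 30, 70],
    "event":     [1, 1, 0, 0, 1, 1, 0, 0, 1, 0],
    "high_expr": [0, 0, 1, 1, 0, 0, 1, 1, 0, 1],
})

high, low = df[df.high_expr == 1], df[df.high_expr == 0]

# Kaplan-Meier curves for the two expression groups.
km = KaplanMeierFitter()
km.fit(high["time"], high["event"], label="high expression")
ax = km.plot_survival_function()
km.fit(low["time"], low["event"], label="low expression")
km.plot_survival_function(ax=ax)

# Log-rank test between the groups, analogous to KmPlot's reported p value.
res = logrank_test(high["time"], low["time"],
                   event_observed_A=high["event"],
                   event_observed_B=low["event"])
print(f"log-rank p = {res.p_value:.3g}")
```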
## 2.3. RXR Agonists Regulate Cancer-Relevant Biological Pathways in MMTV-Neu Tumors
Enrichment analysis on control vs. MSU-42011 vs. bexarotene differential expression data using EnrichR reveals a set of pathways regulated by treatment with the various RXR agonists (Figure 3). The KEGG 2019 mouse database was used for these analyses; analysis using the Wikipathways 2019 mouse database is also shown (Supplemental Figure S1). Identified pathways include genes associated with ECM-receptor interaction, chemokine signaling, focal adhesion, PI3K-Akt signaling, complement and coagulation cascades, and the phagosome. Genes within these pathways encode for macromolecules involved in cellular structure and function, cellular behavior such as adhesion and migration, and downstream signaling pathways.
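A comparable enrichment query against the same KEGG 2019 mouse library can be scripted through the Enrichr web service, for example with the gseapy package; the sketch below uses a short hypothetical gene list and is not the analysis pipeline used in the study (it also requires network access to the Enrichr server).

```python
import gseapy as gp

# Hypothetical differentially expressed genes; in the study the input was
# the DESeq2 gene list deposited under GSE211290.
deg_list = ["Col6a3", "Map9", "Il18", "H2-Aa", "Itga11", "Fndc1"]

# Query the library named in the text (KEGG 2019 Mouse) via Enrichr.
enr = gp.enrichr(gene_list=deg_list,
                 gene_sets=["KEGG_2019_Mouse"],
                 outdir=None)  # do not write report files to disk

# Top enriched terms with their adjusted p values and overlapping genes.
print(enr.results[["Term", "Adjusted P-value", "Genes"]].head())
```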
## 2.4. MSU-42011 and Bexarotene Induce Unique Gene Expression Profiles with Some Unifying Characteristics in Treated Tumors of a HER2+ Murine Model
Enrichment analysis was used to compare differentially expressed genes in control vs. MSU-42011 and control vs. bexarotene groups. Bar charts of these analyses reveal enrichment of shared pathways (focal adhesion, ECM-receptor interaction), as well as pathways unique to MSU-42011 (rheumatoid arthritis, ribosome) and pathways unique to bexarotene (PI3K-Akt signaling pathway, Rap1 signaling pathway) (Figure 4A,B). These unique pathways include genes which code for critical components related to cellular proliferation, immunity, and cellular migration and invasion. Scatterplot depictions of pathways regulated by MSU-42011 (Figure 4C) and by bexarotene (Figure 4D) highlight the similarities and differences in pathway enrichment within a particular cluster across different drug treatments. Volcano plot depictions of pathways regulated by MSU-42011 (Figure 4E) and bexarotene (Figure 4F) highlight the pathways unique to MSU-42011, especially the ribosome pathway. This pathway contains genes which encode for components necessary for rapid cellular turnover, which is particularly relevant to tumor biology [30,31]. KEGG 2019 was used as a database for these analyses.
## 2.5. MSU-42011 Increases Col6a3 and Map9 Expression in Mouse Mammary Tumors
Several genes were selected from the differential expression analysis for validation of mRNA expression by qPCR and protein levels by IHC. Collagen type VI a3 chain (COL6A3) is an extracellular matrix protein which is altered in several types of cancer [32]. Col6a3 mRNA expression (Figure 5A) is increased in tumors treated with MSU-42011 (p = 0.0425) but not in tumors treated with bexarotene. IHC (Figure 5B) demonstrates a 41% increase in Col6a3 protein levels in tumors treated with MSU-42011 (p = 0.0096), and no apparent increase in Col6a3 in bexarotene-treated tumors (Supplemental Figure S2D). KmPlot was used to investigate the relevance of COL6A3 expression in human breast tumors (Figure 5C). High expression of COL6A3 is correlated with increased relapse-free survival (p = 0.031) in HER2+ breast cancer patients. qPCR (Figure 5D) also confirms a significant (p = 0.0026) increase in Map9 mRNA in MSU-42011-treated tumors, while no significant increase was observed in bexarotene-treated tumors. MAP9 is a microtubule-associated protein which regulates the cell cycle and the DNA damage response [33]. High expression of MAP9 is positively correlated with relapse-free survival (Figure 5E) in breast cancer patients (p = 0.0023).
## 2.6. MSU-42011 Increases IL-18 and H2-AA Expression in Mouse Mammary Tumors
As shown in Figure 4, the rheumatoid arthritis pathway is differentially regulated by MSU-42011 but not by bexarotene. The genes within this pathway include immune response genes which may contribute to the anti-tumor immunomodulatory activity of MSU-42011 [34]. The cytokine IL-18 was selected from the rheumatoid arthritis pathway for validation (Figure 6A). In tumors of mice treated with MSU-42011, but not bexarotene (Supplemental Figure S2A), mRNA expression of IL-18 increased (p = 0.0116). IHC (Figure 6B) revealed an increase in IL-18 in tumors treated with MSU-42011 (p = 0.04825).
Interestingly, tumors from the bexarotene group display an apparent paucity of IL-18, even in comparison to control tumors (Supplemental Figure S2B). Importantly, KmPlot analysis reveals that high IL-18 expression is correlated with increased relapse-free survival in breast cancer patients (p = 0.00022) (Figure 6C). MSU-42011-treated tumors also demonstrate a significant (p = 0.040822) upregulation of the gene coding for major histocompatibility complex (MHC) component H2-AA by qPCR (Figure 6D).
## 2.7. MSU-42011 Polarizes Bone Marrow-Derived Macrophages (BMDMs) towards an Anti-Tumor Phenotype
RXR agonists regulate pathways relevant to the function of the immune system, such as rheumatoid arthritis, the complement and coagulation cascade, and cytokine–cytokine receptor interaction. To validate and further characterize the immunomodulatory activity of these compounds, BMDMs treated with RXR agonists were evaluated for expression of cancer-relevant genes within these pathways. Monocytes were harvested and differentiated with MCSF (20 ng/mL). On Day 5, BMDMs were treated with conditioned media from E18-14C-27 cells, derived from MMTV-Neu mammary tumors, to induce a tumor-educated macrophage phenotype. BMDMs were treated with conditioned media alone, or with 300 nM of either MSU-42011 or bexarotene. After 24 h, the relative proportion of F4/80+CD206+ macrophages was significantly (p = 0.02726) lower in BMDMs treated with conditioned media and 300 nM MSU-42011 compared to conditioned media alone (Figure 7A). In comparison, treatment with 300 nM bexarotene and conditioned media did not significantly alter the relative proportion of F4/80+CD206+ BMDMs (p = 0.9423). Treatment with 300 nM of either RXR agonist significantly (p = 0.0016) decreased mRNA expression of IL-13, an immunosuppressive cytokine (Figure 7B). A trend of increasing TLR9 and IRF1 mRNA expression, associated with a pro-inflammatory, anti-tumor phenotype, was observed in BMDMs treated with both RXR agonists (Figure 7C,D). RXR agonists also induced a significant (p = 0.00015) increase in expression of CCL6, a pro-inflammatory cytokine (Figure 7E).
## 3. Discussion
RXR agonists are a class of drugs with anti-tumor activity in preclinical models of breast and lung cancer [9,10,35]. While the known target of these drugs is the nuclear receptor RXR, different RXR agonists have markedly different effects on downstream gene expression. The nature of nuclear receptors—their ability to homodimerize or to heterodimerize with other nuclear receptors, the diversity of the structures of their ligands, and the vast number of target genes—makes RXR an interesting drug target. These characteristics likely differ among RXR agonists, potentially initiating heterodimerization with different nuclear receptor partners or recruiting different coactivators, leading to variations in resulting gene expression which may be clinically beneficial.
For the first time, using RNA-seq, we compared pathway activation and biological activity of the novel RXR agonist MSU-42011 and the FDA-approved bexarotene. The regulation of many similar pathways, including focal adhesion and extracellular matrix components, is shared by these two molecules (Figure 4). Immune-related pathways such as cytokine signaling pathways, complement activation, and genes related to phagosome activity are also shared by both MSU-42011 and bexarotene. Interestingly, validation of individual genes within these pathways shows that while one RXR agonist upregulates an immune- or ECM-related gene, the other RXR agonist does not. For example, MSU-42011 increases expression of Il-18 and Col6a3 at both the mRNA and protein level (Figure 5 and Figure 6), but neither of these two gene products is increased in tumors treated with bexarotene.
Several pathways were identified through enrichment analysis that were unique to a single RXR agonist. For example, the ribosome pathway and the fatty acid biosynthesis pathway, through which macromolecules critical to cellular function are synthesized, were prominent in enrichment analysis for MSU-42011 but not bexarotene. Conversely, the proteoglycans in cancer pathway, containing genes which code for matrix metalloproteinases (MMP), WNT signaling molecules, and growth factors such as IGF1 and FGF2, is prominent in bexarotene differential expression analysis but not MSU-42011.
The increase in Il-18 expression seen at both the level of mRNA and protein in tumors treated with MSU-42011, but not bexarotene, suggests that this RXR agonist promotes a pro-inflammatory tumor microenvironment, which can be harnessed for breast cancer treatment. IL-18 expression has been investigated as a possible prognostic indicator in breast cancer patients [36] and augments the cytotoxicity of NK cells [37]. Further investigation into the mechanism of MSU-42011 is necessary to determine if Il-18 is a critical mediator of anti-tumor immune response, and if it can be used as an indicator of response to therapy.
Furthermore, the increase in H2-Aa mRNA observed in tumors treated with MSU-42011 provides further evidence of its immune modulatory properties. H2-Aa encodes an MHC class II component, higher expression of which is correlated with increased survival in ovarian cancer [38]. MHC II is responsible for antigen presentation to CD4+ T cells, which have recently gained recognition for supporting the activation of cytotoxic T cells and mediating checkpoint inhibition response in cancer [39]. The MHC II pathway is necessary for antitumor immunity in several cancer types and is upregulated by treatment with histone deacetylase (HDAC) inhibitors [40,41]. In triple negative breast cancer, high expression of genes associated with the MHC II pathway correlates with progression-free survival [42]. Pharmacologic means of augmenting MHC II signaling may be a valuable therapeutic strategy for enhancing anti-tumor immunity. The increase in expression of Il-18 mRNA and protein and H2-Aa mRNA observed in tumors treated with MSU-42011, but not bexarotene, may provide insight into the unique immunomodulatory properties of these two RXR agonists.
While COL6A3 expression has been explored as a prognostic biomarker in colorectal cancer [43], less is known about the role of COL6A3 in breast cancer. There is a trend of decreased COL6A3 expression with increasing tumor stage in breast cancer patients [32], which suggests a propensity for invasion and metastasis in these tumors [44]. Further, increased expression of COL6A3 in breast cancer after chemotherapy may predict responsiveness to chemotherapy [45]. Finally, a cleavage fragment of COL6A3 known as endotrophin recruits macrophages through induction of monocyte chemoattractant protein-1 (MCP1) and increases IL-6 and TNFα in the tumor microenvironment [46]. Similarly, in obesity, collagen VI expression in omental white adipose tissue is correlated with expression of MCP-1, CD68, and CD86, providing further evidence that this collagen influences macrophage infiltration and phenotype [47]. As the role of COL6A3 is complex and can vary between cancer types and across tumor staging, the increase in expression of Col6a3 mRNA and protein in tumors treated with MSU-42011 and the resultant effect on invasion and immunity merits further investigation.
The expression of the microtubule-associated protein MAP9 is altered in both colorectal cancer and breast cancer, leading to cell cycle dysregulation [33]. MAP9 hypermethylation in breast cancer leads to decreased expression and may have utility as an epigenetic biomarker [48]. Further, MAP9 transcription is induced upon DNA damage, and MAP9 protein interacts with and stabilizes p53 in Saos-2 cells, leading to increased tumor suppressor activity [49]. As mRNA expression of Map9 is increased in tumors treated with MSU-42011, an exploration of the effects of MSU-42011 on cell cycle control and the ways this may be exploited for therapeutic purposes is warranted.
Based on our RNA sequencing data, particularly the differentially expressed genes and pathways relating to immunity, we investigated the effects of MSU-42011 treatment on cell surface marker and gene expression in BMDMs (Figure 7). MSU-42011 decreased the relative proportion of F4/80+CD206+ BMDMs by flow cytometry, indicating that treatment with MSU-42011 decreases immunosuppressive macrophages, while bexarotene did not have any effect. Further markers of immunosuppressive and pro-inflammatory macrophages were evaluated in BMDMs treated with RXR agonists by qPCR. MSU-42011 decreased expression of Il-13, an immunosuppressive cytokine, and increased expression of Ccl6, a pro-inflammatory cytokine. Furthermore, treatment with MSU-42011 increased expression of Tlr9 and Irf1, an interferon-regulatory factor known to be induced by ligation of TLR9. The TLR9-IRF1-IFN signaling axis has been implicated in macrophage polarization [50]. Taken together, these data provide additional evidence that MSU-42011 skews macrophages away from a tumor-promoting, immunosuppressive phenotype and toward an anti-tumor, proinflammatory phenotype. This effect on macrophages may be important for the anti-tumor activity of MSU-42011.
In conclusion, treatment with RXR agonists results in changes in gene expression that are consistent with effective cancer treatments. As a drug class, RXR agonists display a broad range of activities, regulating different genes and biological pathways. The diversity of these compounds may allow them to be utilized for targeted or personalized cancer therapy.
## 4. Materials and Methods
## 4.1. Drugs
MSU-42011 was prepared as previously described [9,10,11]. Bexarotene was purchased from LC Laboratories (Woburn, MA, USA). For in vivo studies, RXR agonists were dissolved in a vehicle of 1 part ethanol: 3 parts highly purified coconut oil (Neobee oil, Thermo Fisher Scientific, Waltham, MA, USA). A total of 50 mL vehicle or drug dissolved in vehicle was mixed into 1 kg of powdered 5002 rodent chow (PMI Nutrition, St. Louis, MO, USA) using a stand mixer (KitchenAid, Benton Harbor, MI, USA).
## 4.2. In Vitro Experiments
Bone marrow-derived macrophages (BMDM) were isolated from femurs of adult C57BL/6 mice and differentiated using 20 ng/mL MCSF (Biolegend #576406, San Diego, CA, USA), as previously described [51]. Conditioned media was harvested from E18-14C-27 cells, derived from MMTV-Neu tumors, after 48 h of culture. BMDMs were treated using $75\%$ conditioned media supplemented with $25\%$ fresh media, with or without 300 nM RXR agonists for 24 h. IL-4 (10 ng/mL; Biolegend #574304) was used as a positive control to induce a CD206+ immunosuppressive macrophage phenotype.
## 4.3. Flow Cytometry
BMDMs were harvested after 24 h of treatment with conditioned media, with or without RXR agonists, filtered, and stained with fluorescent antibodies against F4/80 (APC, BM8, Biolegend) and CD206 (PE, MR6F3, Thermo Fisher Scientific). Live/dead green (Thermo Fisher Scientific) was used as a viability dye. Samples were run on a BD Accuri C6 (BD Biosciences, San Jose, CA, USA).
## 4.4. In Vivo Experiments
MMTV-Neu mice [14] from our breeding colony (founders were purchased from Jackson Laboratory, Bar Harbor, ME, USA) were fed pelleted chow and palpated for tumors. Once tumors were detected, mice were switched to powdered 5002 chow. Tumors were measured twice weekly with a caliper until 4 mm in diameter, at which time mice were randomized and fed either control diet or diet containing 100 mg RXR agonist per kg chow (~25 mg per kg body weight per day) for 10 days. Tumors were harvested, and sections were either flash frozen for RNA-seq/qPCR or saved in neutral buffered formalin for immunohistochemistry.
## 4.5. RNA Sequencing
Frozen tumor sections (4 samples per treatment group) were weighed and homogenized. RNA was extracted using an RNeasy Mini Kit (Qiagen, Hilden, Germany), and the quality of the RNA was confirmed with an Agilent Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). RNA sequencing was completed by Novogene (Sacramento, CA, USA) as described previously [52]. Raw read counts were analyzed using the DESeq2 package in R (R for Windows v. 4.1.2; R Studio v. 1.4.1717) to generate differential expression profiles, and EnrichR and Ingenuity Pathway Analysis (Qiagen) were used for enrichment analysis. Raw and processed data were deposited in the Gene Expression Omnibus and are available through GSE211290.
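In outline, the DESeq2 step described above can be sketched in R as follows; the file names, the `treatment` metadata column, and the contrast labels are hypothetical placeholders rather than the study's actual inputs.

```r
# Minimal DESeq2 sketch of the differential expression analysis described
# above; file names and the "treatment" column are hypothetical.
library(DESeq2)

counts  <- as.matrix(read.csv("raw_counts.csv", row.names = 1))  # genes x samples
coldata <- read.csv("sample_info.csv", row.names = 1)            # sample metadata

dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ treatment)
dds <- DESeq(dds)  # normalization, dispersion estimation, Wald tests;
                   # outliers are flagged internally via Cook's distance

# Benjamini-Hochberg-adjusted p-values; padj < 0.05 as the cutoff (Section 4.9)
res <- results(dds, contrast = c("treatment", "MSU42011", "control"), alpha = 0.05)
sig <- subset(res, !is.na(padj) & padj < 0.05)
head(sig[order(sig$padj), ])
```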
## 4.6. qPCR
RNA harvested from frozen tumor sections was normalized across samples using a Nanodrop (Thermo Fisher Scientific), and 500 ng of RNA was used to synthesize cDNA using a High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA, USA). PCR was run on a QuantStudio 7 Flex (Thermo Fisher Scientific) using SYBR green fluorescence. PCR data were analyzed using the delta-delta CT method, with GAPDH as the housekeeping control. Error bars represent the standard error of biological replicates, as indicated in the figure legends. The following forward/reverse primers (Integrated DNA Technologies, Coralville, IA, USA) were used: Il-18, 5′-TCCTTGAAGTTGACGCAAGA-3′/5′-TCCAGCATCAGGACAAAGAA-3′; Col6a3, 5′-AAGGACCGTTTCCTGCTTGTT-3′/5′-GGTATGTGGGTTTCCGTTGAG-3′; Map9, 5′-GAAGAGTGCTACAGCCAACAC-3′/5′-ACAACAAGGTTTTTCCCCTTCC-3′; H2-Aa, 5′-TCAGTCGCAGACGGTGTTTAT-3′/5′-GGGGGCTGGAATCTCAGGT-3′.
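As a worked illustration of the delta-delta CT calculation, the short R sketch below uses invented Ct values, not study data.

```r
# Illustrative delta-delta Ct calculation with GAPDH as the housekeeping
# control; all Ct values are invented for demonstration only.
ct_target_treated <- c(24.1, 24.5, 23.9)  # e.g., Il-18 Ct, treated tumors
ct_gapdh_treated  <- c(18.0, 18.2, 17.9)
ct_target_control <- c(26.0, 25.8, 26.3)  # Il-18 Ct, control tumors
ct_gapdh_control  <- c(18.1, 18.0, 18.2)

dct_treated <- ct_target_treated - ct_gapdh_treated  # delta Ct per replicate
dct_control <- ct_target_control - ct_gapdh_control
ddct        <- dct_treated - mean(dct_control)       # delta-delta Ct
fold_change <- 2^(-ddct)                             # relative expression

mean(fold_change)                                    # mean fold change
sd(fold_change) / sqrt(length(fold_change))          # standard error
```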
## 4.7. Immunohistochemistry
Formalin-fixed tissues were embedded in paraffin and sectioned by the Histology Core. Boiling citrate buffer was used for antigen retrieval, and endogenous peroxidase activity was quenched using hydrogen peroxide. Tissue sections were stained with antibodies against IL-18 (1 μg/mL, PA5-79481, Thermo Fisher Scientific) and Col6a3 (20 µg/mL, PA5-49914, Thermo Fisher Scientific), as described [34]. Sections were then labeled with biotinylated secondary antibodies (anti-rabbit, Cell Signaling Technology, Danvers, MA, USA; anti-rat, Vector Labs, Burlingame, CA, USA), as previously described [34]. A DAB substrate (Cell Signaling) was used for signal detection, as per manufacturer-provided protocols, and sections were counterstained with hematoxylin (Vector Labs). The Fiji ImageJ image processing package (version ImageJ2) was used to quantify the intensity of DAB staining by the color deconvolution method [53], and the mean gray value was used to calculate optical density by the formula OD = log(max intensity/mean intensity), with a maximum intensity of 255 for 8-bit images [54].
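The optical-density conversion reduces to a one-line computation, sketched here in R under the stated assumption of 8-bit images; the mean gray values are illustrative.

```r
# OD = log10(max intensity / mean intensity), with max intensity = 255 for
# 8-bit images; the mean gray values (deconvolved DAB channel) are invented.
mean_gray <- c(182, 150, 211, 167)
od <- log10(255 / mean_gray)
round(od, 3)
```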
## 4.8. KmPlot Generation
Survival curves were generated using Kaplan–Meier Plotter (https://kmplot.com/analysis/, accessed on 26 July 2022). This tool allows for correlation of gene expression with publicly available patient survival data [23]. KmPlot sources these patient data from the GEO, EGA, and TCGA databases. The patient samples are split into two groups, high and low expression of the gene in question, using a robust autoselect algorithm to determine the most appropriate cutoff [24]. Breast cancer data were used, and overall or relapse-free survival was compared.
## 4.9. Statistical Analysis
Results were expressed as the mean ± standard error. Data from tumor qPCR experiments were analyzed by a one-tailed t test. $p \leq 0.05$ was considered statistically significant throughout all experiments. For RNA-seq, differential expression analysis was performed using DESeq2. Outliers were detected by Cook’s distance and removed [55]. p values were adjusted to correct for multiple comparisons using the Benjamini and Hochberg method, and padj < 0.05 was considered statistically significant [56]. Data from in vitro experiments were analyzed using one-way ANOVA, and significant differences between groups were determined by the Tukey HSD multiple comparisons test.
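For concreteness, the in vitro comparisons and the multiple-testing correction described above can be sketched in R; all numbers below are invented for illustration.

```r
# One-way ANOVA followed by Tukey HSD, as used for the in vitro data,
# plus a Benjamini-Hochberg adjustment as applied to the RNA-seq p-values.
expr  <- c(1.0, 1.2, 0.9, 1.9, 2.1, 1.8, 1.1, 1.0, 1.3)   # invented values
group <- factor(rep(c("control", "MSU-42011", "bexarotene"), each = 3))

fit <- aov(expr ~ group)
summary(fit)       # one-way ANOVA
TukeyHSD(fit)      # pairwise group comparisons

p.adjust(c(0.0004, 0.012, 0.038, 0.21), method = "BH")  # BH-adjusted p-values
```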
# Associations of COVID-19 Hospitalizations, ICU Admissions, and Mortality with Black and White Race and Their Mediation by Air Pollution and Other Risk Factors in the Louisiana Industrial Corridor, March 2020–August 2021
## Abstract
Louisiana ranks among the bottom five states for air pollution and mortality. Our objective was to investigate associations between race and Coronavirus Disease 2019 (COVID-19) hospitalizations, intensive care unit (ICU) admissions, and mortality over time and determine which air pollutants and other characteristics may mediate COVID-19-associated outcomes. In our cross-sectional study, we analyzed hospitalizations, ICU admissions, and mortality among positive SARS-CoV-2 cases within a healthcare system around the Louisiana Industrial Corridor over four waves of the pandemic from 1 March 2020 to 31 August 2021. Associations between race and each outcome were tested, and multiple mediation analysis was performed to test if other demographic, socioeconomic, or air pollution variables mediate the race–outcome relationships after adjusting for all available confounders. Race was associated with each outcome over the study duration and during most waves. Early in the pandemic, hospitalization, ICU admission, and mortality rates were greater among Black patients, but as the pandemic progressed, these rates became greater in White patients. However, Black patients were disproportionately represented in these measures. Our findings imply that air pollution might contribute to the disproportionate share of COVID-19 hospitalizations and mortality among Black residents in Louisiana.
## 1. Introduction
Coronavirus Disease 2019 (COVID-19) severity and mortality have been associated with several vulnerability factors, including comorbidities, environmental exposures, natural disasters, sociodemographic factors, and residence in congregate settings [1,2]. During the first wave of COVID-19 cases in the U.S., transmission in congregate settings was responsible for most disease spread [3], while comorbidities among older residents likely elevated risk of death [4]. The second wave of COVID-19 cases in the U.S. saw disproportionate numbers of severe disease and deaths among Black, Hispanic, Native American, and immigrant population groups [2,5,6]. The third wave may have occurred in part due to asymptomatic transmission in congregate settings including prisons and long-term care facilities, disproportionately impacting Black and Hispanic populations [2].
Soon after the start of the pandemic, some evidence emerged of an association between long-term average air pollution concentrations and the prevalence or severity of COVID-19. Notably, significant associations were observed for long-term average concentration of particulate matter (PM) having a diameter smaller than 2.5 μm (PM2.5) with SARS-CoV-2 infection prevalence [7,8,9], COVID-19 disease severity [10], intensive care unit (ICU) admission [11,12], ventilator use [12], and mortality [7,11,12,13]. Associations were also observed for long-term average diesel PM concentration estimates for COVID-19 prevalence and mortality [7]; average nitrogen dioxide (NO2) concentrations for prevalence [9,10,14], hospitalization [12], ICU admission [12], ventilator use [12], and mortality [12,14]; ozone (O3) concentration for mortality [12]; and hazardous air pollutant indices for respiratory and immunological hazard and mortality [15]. Chen et al. [12] also calculated associations with hospitalization, ICU admission, ventilator use, and mortality for 1-month average concentrations of PM2.5 and NO2. However, evidence was mixed, with some studies showing no association for NO2 [11], O3 [7,9,10,14], or PM2.5 [14,15]. Although many studies suggested a relationship between air pollutant concentration and COVID-19 outcomes, these studies primarily occurred early in the pandemic. Less is known about the association between air pollutant exposure and COVID-19 over time.
Strategies to respond effectively to public health emergencies such as the COVID-19 pandemic require understanding potential causal pathways for disease outcomes [16,17]. Mediation models can be useful to test how conditions present in populations may influence disease status either directly or indirectly. Disparities in COVID-19 outcomes by race combined with evidence about the relationship between COVID-19 and comorbidities, insurance status, and pollution exposure led to the hypothesis that there is a causal pathway between race and COVID-19 mediated by comorbidities, insurance status, and pollution exposure (Supplemental Figure S1).
Louisiana parishes routinely score well below the national average on quality of life, morbidity, and mortality indices such as low birthweight, child poverty, and median household income [18]. Based on the most recently available data, Louisiana ranks 46th among the states in air quality as measured by average daily PM2.5, 47th in the percentage of adults who smoke, and 45th in the COVID-19 death rate. For the period of 1 March 2020–31 August 2021, $37.7\%$ of Louisiana’s COVID-19 deaths occurred in people identifying as non-Hispanic Black (hereafter referred to as “Black patients”) [19]. In 2020, $41.7\%$ of Louisiana’s COVID-19 deaths occurred among Black patients, compared with $31.2\%$ of Louisiana residents identifying as Black [20]. This is consistent with a recent analysis that connected disparities, systemic racism, economic stress, and COVID-19 mortality [21].
Given the disproportionate impact of COVID-19 on communities of color in Louisiana and the U.S., the goals of this research were to investigate the association of race and COVID-19 outcomes over time and to identify if exposures to air pollution and other characteristics, if any, may mediate associations of race with COVID-19 hospitalizations, ICU admissions, and mortality. We combined datasets from a Louisiana hospital system distributed across the Industrial Corridor and an air pollution database to include both individual and environmental level risk factors. We investigated factors including race, insurance status, comorbidity, and pollutant exposure for four waves of COVID-19 between 1 March 2020 and 31 August 2021.
## 2.1. Study Population and Health Data
In our cross-sectional study, we evaluated associations between race and COVID-19 hospitalizations, ICU admissions, and mortality and tested for factors that may mediate relationships. We used the Franciscan Missionaries of Our Lady (FMOL) Health System COVID-19 registry to identify patients at ten Louisiana locations distributed across the Industrial Corridor (Supplemental Table S1). The study was approved by the Louisiana State University Health Sciences Center-New Orleans Institutional Review Board (protocol #1986).
A total of 13,454 patients aged eighteen years or older who tested positive by a polymerase chain reaction (PCR) test for SARS-CoV-2 were identified using the Epic healthcare software between 1 March 2020 and 31 August 2021. This period is broken down by waves: 1 March–10 June 2020 (First Wave), 11 June–6 October 2020 (Second Wave), 7 October 2020–30 June 2021 (Third Wave), and 1 July–31 August 2021 (Fourth Wave). These were chosen to minimize both cases and mortality at the beginning and end of each period using the Johns Hopkins database for Louisiana [22].
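A minimal sketch of how test dates can be binned into these waves is shown below; the `patients` data frame and its `test_date` column are hypothetical.

```r
# Assign each SARS-CoV-2 test date to one of the four study waves;
# the example dates are invented.
patients <- data.frame(test_date = as.Date(c("2020-04-15", "2020-08-01",
                                             "2021-01-20", "2021-07-15")))
wave_breaks <- as.Date(c("2020-03-01",   # First Wave begins
                         "2020-06-11",   # Second Wave begins
                         "2020-10-07",   # Third Wave begins
                         "2021-07-01",   # Fourth Wave begins
                         "2021-09-01"))  # day after the study period ends
patients$wave <- cut(patients$test_date, breaks = wave_breaks,
                     labels = c("First", "Second", "Third", "Fourth"),
                     right = FALSE)      # intervals are [start, next start)
table(patients$wave)
```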
Patient-level variables included hospital department, SARS-CoV-2 test date, SARS-CoV-2 test result, age, insurance status (private insurance, Medicaid, Medicare, and self-pay), self-reported race, self-reported ethnicity, sex, admission date, discharge date, length of hospital stay, admission status, ICU stay, ICU admission date, ICU discharge date, length of ICU stay, discharge dispatch, body mass index (BMI), presence of comorbidities, census tract, and census block group. Specific comorbidities were not listed consistently in the database, so they were simply recoded as presence (1) or absence (0) of any comorbidities for each patient in the database. To minimize bias in the patient database, negative PCR tests were not included in the database because tests were often obtained for non-medical reasons (e.g., work, travel, recreation, routine medical procedures).
Records were complete for hospitalization and ICU admission; records were missing for mortality for 171 Black patients and 128 White patients. Data with missing hospitalization, ICU, or mortality information were removed from the dataset. The final sample size was 11,331. Ethnicity data were missing for 9977 patients. A total of 113 patients (<$1\%$) responded that their ethnicity was “Hispanic or Latino/a”, “Mexican, Mexican American, or Chicano”, or “Other Hispanic, Latino/a, or Spanish origin”, while 1271 patients responded that they were “Not of Hispanic or Latino/a or Spanish Origin”. Therefore, ethnicity was not included in the statistical analyses.
## 2.2. Air Pollution Data
Air pollution burden calculations were based on Mikati et al. [23]. Absolute burden for each respiratory hazardous air pollutant was calculated by census tract as the weighted average of the emissions over the block groups within each tract. Facility-level air pollutant emissions data across the state of Louisiana were obtained from the 2017 National Emissions Inventory [24], and data for the census block groups and census tracts, including shape files and demographic characteristics, were obtained from the 2015–2019 American Community Survey [25]. Air pollutant emissions for each facility were assigned to a census block group when the block group’s centroid fell within a 2.5-mile radius of the facility. Air pollution burden was calculated as the sum of assigned facility-level emissions for each block group. Air pollution burden was then summed for each census tract. Air pollutants included PM2.5 and hazardous air pollutants (HAPs) known to have respiratory health effects: 1,3-dichloropropene, 2,4-toluene-diisocyanate, acetaldehyde, acrolein, acrylic acid, arsenic, beryllium, cadmium, chlorine, chloroprene, chromium, diesel PM, formaldehyde, hexamethylene-1,6-diisocyanate, hydrazine, hydrochloric acid, naphthalene, nickel, polycyclic organic matter (POM), propylene, and triethylamine. Oil and gas wells and refineries, which are prevalent naphthalene sources, and a neoprene plant, a chloroprene source, fall within the hospital service area (Supplemental Figure S2). Emissions burdens were assigned to 12,031 individual COVID-19 patients in the FMOL Health System database based on their census tract of residence. Bias minimization related to spatial assignment of emissions burdens is described in Mikati et al. [23].
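A simplified sketch of this assignment logic follows; the facility and block group coordinates, emissions, and populations are invented, and the real analysis used NEI facility data with ACS shape files rather than hand-entered coordinates.

```r
# Assign facility emissions to block groups whose centroids fall within
# 2.5 miles, sum per block group, then aggregate block groups to tracts.
haversine_miles <- function(lat1, lon1, lat2, lon2) {
  r <- 3958.8; to_rad <- pi / 180
  a <- sin((lat2 - lat1) * to_rad / 2)^2 +
       cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin((lon2 - lon1) * to_rad / 2)^2
  2 * r * asin(sqrt(a))   # great-circle distance in miles
}

facilities <- data.frame(fac_lat = c(30.22, 30.45), fac_lon = c(-91.02, -91.20),
                         emissions = c(12.5, 3.1))               # tons/yr, invented
blocks <- data.frame(bg_id = c("A", "B", "C"), tract_id = c("T1", "T1", "T2"),
                     bg_lat = c(30.23, 30.30, 30.46),
                     bg_lon = c(-91.03, -91.10, -91.21),
                     pop = c(900, 1200, 700))

blocks$burden <- sapply(seq_len(nrow(blocks)), function(i) {
  d <- haversine_miles(blocks$bg_lat[i], blocks$bg_lon[i],
                       facilities$fac_lat, facilities$fac_lon)
  sum(facilities$emissions[d <= 2.5])   # facilities within the 2.5-mile radius
})

# tract-level burden as the population-weighted average over block groups
tapply(seq_len(nrow(blocks)), blocks$tract_id, function(idx)
  weighted.mean(blocks$burden[idx], blocks$pop[idx]))
```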
## 2.3. Statistical Analysis
Differences in population characteristics, including air pollutant burden, were first illustrated using summary statistics. Direct relationships of race with other demographic variables (age, sex, BMI, presence of comorbidities, insurance status) or with disease-related variables (hospitalization, ICU admission, mortality) were screened via χ2 or ANOVA for categorical or continuous variables, respectively. Patient status was determined using hospital data for admission status, length of hospital stay, ICU status, and length of ICU stay. p-value < 0.05 for the χ2 or ANOVA test signified a potential significant difference between Black and White COVID-19 patients.
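The screening step translates directly into standard R tests; the sketch below uses a simulated `patients` data frame, not the study registry.

```r
# Screening: chi-squared test for a categorical variable vs. race, ANOVA for
# a continuous variable vs. race; the data are simulated for illustration.
set.seed(42)
patients <- data.frame(
  race         = factor(rep(c("White", "Black"), each = 50)),
  hospitalized = rbinom(100, 1, 0.4),
  age          = rnorm(100, mean = 55, sd = 15)
)

chisq.test(table(patients$race, patients$hospitalized))  # categorical screen
summary(aov(age ~ race, data = patients))                # continuous screen
```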
We used mediation analysis to test for environmental risk factors, called third variables, that might explain widely reported racial disparities in the COVID-19 outcomes. Mediation analysis is used here because it tests for causal associations from the explanatory variable (race) to third variables (environmental risk factors) and then to the outcome (COVID-19 hospitalization, ICU admission, or mortality) to determine if the pollutants are responsible for the association [26,27,28]. Potential mediators that intervene in the associations of race with COVID-19 outcomes (hospitalization, ICU admissions, mortality) were first evaluated. The variables included age, insurance status (private insurance, Medicaid, Medicare, and self-pay), ethnicity, sex, presence of comorbidities, and pollutant emissions. ANOVA or χ2 testing was performed to check the relationship between race and each variable, and between each variable and health outcomes. Potential mediators and potential covariates in the association between race and health effect were identified. Associations of each variable with both race and health effect indicated that the variable is a potential mediator. Variables associated with just health effects but not with race were identified as covariates to be controlled in the mediation analysis. Mediation analysis was then used to test if a portion of the race–outcome relationship could be accounted for by each intermediate variable after adjusting for all potential mediators, covariates, and confounders [26,27,28]. Significant mediators with the same sign as the total effect were considered as part of the racial differences explained by the mediator, while those with opposite sign suggested that the potential mediator caused greater uncertainty.
We used the R software v4.0.5 for data organization (packages dplyr, tidyr, bit64, and data.table) and for the merger of geographic data with air pollution emissions data and output of shape files containing emissions burdens (packages tigris, Hmisc, sp, and rgdal). The R package mma was used to perform the mediation analysis [29]. Confidence balls [30] were created to control the overall confidence level at $95\%$. We confirmed each of the criteria listed under the STrengthening the Reporting of OBservational Studies in Epidemiology (STROBE) checklist for cross-sectional studies during completion of this manuscript [31].
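A hedged sketch of the mma call is given below; the simulated data frame, the chosen mediators, and the tuning arguments are assumptions for illustration, and a real analysis would follow the package documentation [29].

```r
# Multiple mediation analysis sketch with the mma package; variable names,
# data, and argument values are illustrative only.
library(mma)

set.seed(7)
n <- 200
patients <- data.frame(
  race        = rbinom(n, 1, 0.5) + 1,   # 1 = White, 2 = Black (paper's coding)
  age         = rnorm(n, 55, 15),
  comorbidity = rbinom(n, 1, 0.4),
  pm25_burden = rlnorm(n)
)
hospitalized <- rbinom(n, 1, 0.4)        # binary outcome

fit <- mma(x = patients[, c("age", "comorbidity", "pm25_burden")],
           y = hospitalized,
           pred = patients$race,
           mediator = 1:3,               # candidate mediators to screen
           alpha = 0.05, alpha2 = 0.05,  # significance levels for mediator tests
           n2 = 100)                     # bootstrap samples for indirect effects

summary(fit)  # total, direct, and mediator-specific indirect effects
```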
## 3. Results
Of the 11,331 patients in the final sample, 5708 ($50.4\%$) identified as non-Hispanic Black, and 5623 ($49.6\%$) identified as non-Hispanic White (Table 1). In comparison, $33.8\%$ of the population of Louisiana census tracts associated with patients’ residential addresses (referred to hereafter as the “patient population”) identified as non-Hispanic Black, and $58.8\%$ identified as non-Hispanic White. Census tract population data were available for $89\%$ of patients. A total of 6210 ($54.8\%$) cases identified as female, and 5119 ($45.2\%$) identified as male. On average, Black patients were 7.9 years younger than White patients. Black patients had a higher average BMI ($p$-value $< 2 \times 10^{-16}$), but the average BMI for both groups was in the obese range (BMI > 30). Lengths of hospital and ICU stays were both significantly higher among White patients, although that difference diminished for Medicare recipients and those without insurance. More Black patients had Medicaid ($61.9\%$) or were uninsured ($61.6\%$), while more White patients had private insurance ($62.5\%$) or Medicare ($59.4\%$). Among the twenty-two pollutants tested, emissions burden was statistically significantly higher for Black patients for seventeen compounds and for White patients for three compounds, with no significant difference for two pollutants, hydrazine and propylene.
For the study duration, hospitalizations were significantly higher among White patients ($53.4\%$), while ICU admissions were significantly higher among Black patients ($52.4\%$). Table 2 provides the frequency of hospital and ICU admissions and deaths for the full study period and for each wave of the study. Equitable Black and equitable White indicate the ratio of the share of the population of patients in each group compared with the number of patients that would be expected for each group based on the proportion of each group in the Louisiana census tracts sending patients to the FMOL Health System. Compared with their share of the patient population, Black patients were over-represented among hospitalizations by $28\%$, among ICU admissions by $43\%$, and among total COVID-19 patients by $38\%$ (Table 2). Hospital and ICU admissions significantly exceeded the share of the population for Black patients by $86\%$ and $89\%$, respectively, during the first wave and by $40\%$ and $56\%$, respectively, during the second wave. By the third wave, the proportions of hospital and ICU admissions were higher among White patients with a significant χ2, but the proportions of hospital and ICU admissions among Black patients were $16\%$ and $36\%$ greater, respectively, than the share of the population identifying as Black.
Information regarding mortality (patients who expired while at the hospital or within 7 days of discharge) was available for 11,032 ($97.3\%$) cases (Table 2). For the study duration, the proportion of those who died was significantly higher for White patients, but the proportion of Black patients who died was still $25\%$ greater than the proportion of Black people in the Louisiana census tracts sending patients to the FMOL Health System. The proportion of patients who died was nearly $65\%$ for Black patients during the first wave, with the share of the patient population that is Black over-represented by $78\%$, but was significantly higher for White patients during the second and third waves and not significantly different in the fourth wave. During the second wave, mortality among Black patients was still $28\%$ higher than the share of patient population identifying as Black.
The mediation analysis figures (Figure 1, Figure 2 and Figure 3 and Figures S3–S14) illustrate the relative relationships between effect estimates for Black and White patients and how much the health effect (hospital admissions, ICU admissions, or mortality) can be explained by other factors. Based on the coding (1 = White, 2 = Black), a positive total effect suggests a larger effect in Black patients compared with White patients, and a negative total effect suggests a larger effect in White patients compared with Black patients. The direct effect illustrates how much of the health effect with respect to race can be explained only by race. The other effects show how much the health effect with respect to race can be explained by other factors, such as age, sex, comorbidity, or air pollution. For each factor, an effect that is the same sign as the total effect with a confidence interval that does not include zero suggests that the specific factor can explain some of the race–health effect relationship. An effect with a sign that is different from the total effect and/or large confidence intervals can suggest large uncertainty in the total effect or may indicate that a direct effect or mediated effect may partially explain effect on a different race than is represented in the total effect.
Age and, with a smaller contribution, presence of comorbidities were significant mediators of the race–hospitalization relationship (Figure 1) for the entire study period. The negative sign of the total effect and direct effect indicated greater hospital admissions among White patients, with age and comorbidities as significant mediators for each wave. Naphthalene and arsenic were significant mediators of the total effect for the duration of the study. Naphthalene was not a significant mediator for any of the individual waves, and arsenic was only for the fourth wave. PM2.5 and chromium exposures may have increased the effect among Black patients. However, these exposures may have added uncertainty to the race–hospitalizations total effect because the different sign of these mediation coefficients widened the confidence intervals around the total effect.
The model for race–ICU admission for the entire study period (Figure 2) included a direct effect that was larger than and opposite in sign to total effect, widening the confidence interval around total effect to suggest uncertainty. The direct effect of different sign may suggest that mediating factors, such as age, comorbidity, sex, and exposure to chloroprene, naphthalene, and propylene dichloride, may contribute to a greater total effect in White patients but that Black patients may be more likely to experience COVID-19 ICU admissions in the absence of the mediating factors. PM2.5 and chromium emissions burden potentially contribute to a greater effect in Black patients but widened the confidence intervals around total effect. Age was a mediator of the race–ICU admission effect during each wave. During the third wave, the total effect between race and ICU admission was near zero, but there was a greater direct effect on Black patients and greater indirect effect of PM2.5 emissions on Black patients balanced by greater indirect effects of age, cadmium emissions, and nickel emissions on White patients. The fourth wave produced a large total effect for the race–ICU admission model that included a direct effect comprising more than half of the total effect and indirect effects from age, insurance status, sex, and emissions of POM.
The mediation analysis results indicate that for the total duration and for each wave, there was a greater total effect in White patients, with age consistently a significant mediator of the total effect of race on mortality (Figure 3). The direct effect of different sign may suggest that being of Black race predicts a greater race-based mortality effect in COVID-19 patients, and the greater total mortality effect in White patients may have been driven by mediating factors. Sex and comorbidities had smaller indirect effects for the entire study period but were still significant. Naphthalene was identified as a mediator of the total effect, contributing to a greater effect in White patients for the total duration, while hydrochloric acid added uncertainty to the assessment of mediation. Hydrochloric acid burden may have contributed to the effect in Black patients. Naphthalene was identified as a potential mediator during the first wave but was not significant and added uncertainty to that model. POM was a significant mediator of the race–mortality relationship during the fourth wave. POM emerged as a potential mediator in the total duration model but was of small magnitude.
## 4. Discussion
A complicated picture of racial disparities in COVID-19 hospitalization, ICU admission, and death emerges from these results. For the entire study period, hospitalization and mortality rates among those who were diagnosed with COVID-19 in Louisiana’s Industrial Corridor were greater for White patients than for Black patients, while ICU admission rates were higher for Black patients. These proportions shifted towards White patients by late 2020. However, the proportion of those diagnosed with COVID-19 as well as those hospitalized, admitted to the ICU, and who died remained disproportionately higher for Black patients compared with the patients’ residential areas, despite the 7.9-year age difference between Black and White patients. For example, across the entire study period, COVID-19 mortality among Black patients was $25\%$ greater than what would be anticipated based on the proportion of the patient population identifying as Black, while COVID-19 mortality among White patients was $14\%$ below what would be anticipated based on the patient population identifying as White.
Among the population of those who had to be hospitalized due to COVID-19, most of the association of race could be explained by mediators, i.e., third variables. Age was the strongest mediator, accounting for the largest share of the association between race and COVID-19 hospitalization. In each wave, the average age of Black patients was 8–9 years younger than the average age of White patients. In fact, life expectancy for Black Louisiana residents is 3.4 years shorter than for White Louisiana residents [32]. These factors make it difficult to disentangle the effect of race from the effect of age. Cronin and Evans [33] calculated the U.S. COVID-19 mortality rate throughout 2020 by race-ethnicity and age and found higher mortality for Black males and females for every age group (0–44 y, 45–64 y, 65–74 y, and 75+ y) with a greater effect of age than race or sex.
Findings that naphthalene and chloroprene explained part of the associations between White race and ICU admissions and that naphthalene also explained part of the associations of White race with hospital admissions and mortality were surprising given that their burdens among Black patients in Louisiana were 8.9 and 4.5 times higher, respectively, than for White patients. Chlorine was found to explain ICU admissions among Black patients, and hydrochloric acid was found to explain mortality among Black patients. These findings are consistent with chlorine’s burden being 17 times greater and hydrochloric acid’s burden being 8.0 times greater among Black patients than White patients. Terrell and James [15] noted higher COVID-19 incidence in locations with a higher respiratory hazard index, where the index was computed by the U.S. EPA based on HAPs emissions. PM2.5 explained ICU admissions and mortality among Black patients and was 5.2 times greater among Black patients compared with White patients. Several studies [7,11,12,13] found associations of PM2.5 with COVID-19 using data from the first few months of the pandemic, but they either used a nationwide domain or studied different parts of the country. Sidell et al. [9] studied how the relationship between air pollution and COVID-19 infection changed in a southern California cohort over four waves spanning 1 March 2020 through 28 February 2021. They observed that associations persisted for each wave and for the entire duration of their study for both 1-month average and 1-year average PM2.5 and NO2 concentrations, and between COVID-19 infection and 1-year average O3 concentrations for the second, third, and fourth waves and the entire study duration. However, the magnitude of the associations declined over the third and fourth waves, especially for PM2.5. Uncertainties persist about the influence of air pollution on COVID-19 outcomes over the course of the pandemic. Terrell and James [15] calculated a correlation of 0.21 for PM2.5 concentration with COVID-19 mortality for Louisiana, and Xu et al. [34] noted for a study of COVID-19 in Texas that PM2.5 concentrations were not associated with COVID-19 mortality.
There were some limitations specific to this dataset. These analyses reflect the data and results of the full population that interfaced with the FMOL Health System based primarily in the Industrial Corridor. This selective population was not representative of all Louisiana COVID-19 hospitalizations and thus limits some generalizability of our results for the full state. The most recent HAP emission data were from 2017. Additionally, vaccination status was not included in the dataset but could have affected severe outcomes during the last two waves.
Mediation analysis showed a clear relationship between race and outcome at the beginning of the pandemic, but race appeared less influential over time. Mediation analyses highlighted the uncertainty in the race–outcome relationships across waves. Although several air pollutants were associated with race, with higher emissions burdens among predominantly Black census tracts, air pollution did not appear to consistently mediate the total race–outcome relationship for most waves. Uncertainties in the mediation analyses raise questions about unmeasured confounding. VanderWeele [35] asserted four necessary assumptions for mediation analysis: (1) control for confounding of the exposure–outcome relationship, (2) control for confounding of the mediator–outcome relationship, (3) control for confounding of the exposure–mediator relationship, and (4) no confounder of the mediator–outcome relationship is affected by the exposure. The first three were accomplished through the process of checking for significant associations among the exposure, potential mediator, and outcome. However, the final assumption is more difficult to enforce for this study given that long-standing racialization may introduce other, uncontrolled factors [36]. Similarly, it is difficult to ascertain whether any mediators were omitted from the analysis. Additionally, exposure measurement error or exposure misclassification has the potential to weaken the associations between the exposure and mediators. In the case of the HAP burdens, Mikati et al. [23] sought to control this by testing different assignment radii and found little difference. Use of census tract-level assignments also helps to localize the exposure estimates.
## 5. Conclusions
The wave-by-wave results of this study indicate that the role of race in the associations of COVID-19 outcomes has evolved over the course of the pandemic in Louisiana. Early in the pandemic, the association of race with hospitalization, ICU admission, and mortality appeared to be mediated by age. However, the younger age profile of Black COVID-19 patients contradicts findings of enhanced risk to older patients [33], suggesting that race rather than age played a role, especially early in the pandemic. As time went on, the analysis revealed greater impact on White patients in terms of overall numbers, but still with a disproportionate impact on Black patients compared with the local population. These findings reveal a need for strategies that focus on disadvantaged communities and individuals to protect each population group from exposure to the SARS-CoV-2 virus and from the severe impacts of COVID-19. Our findings also highlight a need to disentangle the associations of COVID-19 outcomes with race as a marker for measures of disadvantage and social determinants of health.
Burden from air pollutants may have explained some of the race–outcome associations. The findings of a greater effect of chlorine and PM2.5 on ICU admissions and a greater effect of hydrochloric acid on mortality among Black patients were not surprising, because their burdens among Black patients were 17, 5.2, and 8.0 times higher, respectively, than among White patients. Our results suggest that disparities in environmental conditions may have exacerbated inequities in COVID-19 impacts among Black patients.
# Can Adipose Tissue Influence the Evaluation of Thermographic Images in Adolescents?
## Abstract
Infrared thermography (IRT) is an easy-to-use technology for clinical purposes, serving as a pre-diagnostic tool for many health conditions. However, the analysis process of a thermographic image needs to be meticulous to support an appropriate decision. Adipose tissue is considered a potential factor influencing the skin temperature (Tsk) values obtained by IRT. This study aimed to verify the influence of body fat percentage (%BF) on Tsk measured by IRT in male adolescents. A total of 100 adolescents (16.79 ± 0.97 years old and body mass index of 18.41 ± 2.32 kg/m²) were divided into two groups through the results of a dual-energy X-ray absorptiometry analysis: obese ($$n = 50$$, %BF 30.21 ± 3.79) and non-obese ($$n = 50$$, %BF 11.33 ± 3.08). Thermograms were obtained by a FLIR T420 infrared camera and analyzed by ThermoHuman® software version 2.12, subdividing the body into seven regions of interest (ROI). The results showed that obese adolescents presented lower mean Tsk values than the non-obese adolescents for all ROIs ($p \leq 0.05$), with emphasis on the global Tsk (0.91 °C) and the anterior (1.28 °C) and posterior trunk (1.18 °C), with “very large” effect size values. A negative correlation was observed for all the ROIs ($p \leq 0.01$), mainly in the anterior (r = −0.71, $p \leq 0.001$) and posterior trunk (r = −0.65, $p \leq 0.001$). Tables of thermal normality were proposed for different ROIs according to the classification of obesity. In conclusion, %BF affects the Tsk values registered in male Brazilian adolescents assessed by IRT.
## 1. Introduction
Skin blood flow has been studied for many years, especially for its important role in human thermoregulation. The physiology and vascular anatomy of the skin create a typical pattern of temperature distribution, which must remain within a certain distribution range to be considered healthy. When temperature values deviate from this standard considered ideal, this can be a sign of some kind of illness.
Infrared thermography (IRT) is a non-invasive, radiation-free, and easy-to-apply technology particularly suitable for precisely mapping the skin temperature (Tsk) through the analysis of thermographic images, which is frequently used for clinical purposes as an auxiliary tool in the process of diagnosing diseases [1,2,3] and in the prevention and rehabilitation of injuries [4,5,6,7]. The procedure is performed using a thermographic camera with a sensor responsible for capturing the heat radiated from the skin’s surface and transforming it into a temperature scale. The camera sensor is positioned close to the evaluated subject and provides a real-time representation of the Tsk distribution pattern in high resolution.
To obtain a quality thermographic image, the acquisition process must follow specific guidelines, such as those suggested by Moreira et al. [8], and observe several factors that may influence image evaluation. This is addressed in the review by Fernández-Cuevas et al. [9], which presents studies indicating that technical, environmental, and individual internal and external factors can influence the analysis by IRT, which is relevant for medical diagnosis purposes or for understanding human thermoregulation processes.
Among the internal factors to be observed, one of the main ones is body composition [9]. Body fat has a lower level of thermal conductivity than the other tissues involved in the thermoregulation process [10], acting as a “body thermal insulator” [11]: it provides thermal resistance that reduces heat conduction between the body core and peripheral regions (e.g., the skin) by $40\%$ to $50\%$ [10], and it can influence the Tsk of the area where it is more concentrated [12]. Adipose tissue has lower thermal conductivity values than muscle tissue [13], dermis [13], and epidermis [14]. Furthermore, obesity is associated with increased release of the inflammatory cytokines TNF-α and IL-6 from perivascular adipose tissue around healthy blood vessels, which free radical scavengers or cytokine antagonists can block, directly affecting the mechanisms of skin vasodilation and vasoconstriction [15,16].
Some studies have investigated whether the amount of body fat can interfere with the Tsk assessed by IRT in men [17,18,19,20] and women [17,18,21,22,23] and, in general, observed that individuals with a greater amount of fat presented lower Tsk values in body regions of interest (ROI) such as the trunk, arms, and legs. This factor should therefore be considered during the evaluation of thermal images for a more precise assessment of the results.
Given the need for more precise knowledge on this subject, since the few existing studies are restricted to the adult population [17,18,19,20,21,22,23], and given that IRT is increasingly used in clinical settings, investigating the influence of this characteristic in other age groups seems crucial for increasing the thermal image evaluation capacity of professionals working with IRT.
Thus, the objective of this study was to verify the influence of body fat on the Tsk values of male adolescents and to provide tables of thermal normality that help in the process of evaluating thermographic images and the subsequent diagnosis of possible diseases or sports injuries or in supporting the physical rehabilitation process. It is hypothesized that %BF will present a negative correlation with Tsk values and that participants with higher amounts of body fat will show a lower Tsk pattern in the regions of the trunk, arms, and lower limbs.
## 2.1. Participants
After evaluating 216 male high school students from public and private schools in a city in the interior of Brazil, we included 100 participants in the study. This number was based on the total number of participants considered obese after the initial assessment. Thus, we intentionally selected the 50 individuals considered obese (16.83 ± 0.93 years, 78.94 ± 10.08 kg, 1.76 ± 0.07 m height, and a body mass index of 25.63 ± 2.96 kg/m2), and to perform a statistical evaluation with the same number of non-obese participants, we randomly selected, among the remaining 166 evaluated, 50 non-obese individuals (16.75 ± 1.01 years, 56.49 ± 8.51 kg, 1.75 ± 0.07 m height, and a body mass index of 18.46 ± 2.50 kg/m2). The final characteristics of the sample were 16.79 ± 0.97 years, 67.71 ± 14.61 kg of body weight, 1.75 ± 0.07 m height, and a body mass index of 18.41 ± 2.32 kg/m2. As a characterization criterion for individuals with or without obesity, we used the classification proposed by Williams et al. [24] specifically for teenagers. The randomization process of the 166 evaluations was carried out using the website https://www.randomizer.org/ (accessed on 19 December 2022).
As inclusion criteria, we selected male individuals who were apparently healthy, without apparent motor or intellectual deficiency, and aged between 14 and 19. Those excluded from the research were those without a signed informed consent or presenting some of the following exclusion criteria: smoking; history of kidney problems, musculoskeletal injury in the last two months, skin burns, or symptoms of pain in some body region; or sleep disturbances or fever over the previous seven days, physiotherapy or dermatological treatments with creams in the last two days, ointments or lotions for local use in the last two days, consumption of medication affecting Tsk (i.e., anti-inflammatory, antipyretic, or diuretics), or any dietary supplement with potential interference with water homeostasis or body temperature in the last two weeks. In addition, participants could not perform resistance training.
The study was approved according to ethical criteria for research involving human beings by the Ethics Committee of the local institution under registration number CAAE 40934275729. After the characteristics and objective of the study were explained, all the participants (or their legal guardians, in the case of those under 18 years old) voluntarily signed the written consent before participating in the study.
## 2.2.1. Anthropometric Assessment of the Body Fat Percentage (%BF)
All the anthropometric variables were collected by trained professionals with level II certification from the International Society for the Advancement of Kinanthropometry (ISAK) [25]. Initially, height was measured using a portable stadiometer (Cescorf®, Porto Alegre, Brazil) with a precision of 1 mm and body mass with a digital balance (Welmy W 200/5, Brazil) with a precision of 0.1 kg. The %BF was determined by dual-energy X-ray absorptiometry (DXA) by a single technician duly qualified for this function, using a GE Healthcare® densitometer, Lunar Prodigy Advance DXA System (software version: 13.31), which provides the values of total and segmental fat (i.e., trunk, arms, and lower limbs). The equipment was calibrated daily according to the manufacturer’s specifications to guarantee the quality of the measurements.
## 2.2.2. Thermography Assessment
The thermographic image collection protocol was carried out following what was established by Moreira et al. [8], carefully observing all the factors that need to be considered to obtain a quality image.
Four thermographic images from the upper and lower body (see Figure 1), in the anterior and posterior positions, were registered from each subject using a T420 infrared camera (FLIR®, Stockholm, Sweden) located perpendicularly to the center of the recorded body areas. The imager had an accuracy of $2\%$, a spectral band of 7.5–13 µm, a 60 Hz rate, automatic focus, and a resolution of 320 × 240 pixels and could detect temperature variations ≤ 0.05 °C. It was connected at least 30 min before all the evaluations to allow the stabilization of its thermal sensor, with the emissivity set at 0.98. During data collection, ambient temperature (21.3 ± 0.7 °C) and humidity (55.3 ± $2.2\%$) were controlled according to specific recommendations for this type of evaluation [8,9] and monitored through a portable meteorological station (Instrutherm®, THAL-300, São Paulo, Brazil). After stabilization of the temperature and humidity values in the room, the subjects remained standing, wore only slippers and shorts, and avoided any contact with surfaces or scratching themselves for 10 min [26] before the thermographic images were captured. All the thermograms were obtained in the morning to reduce the influence of circadian rhythm on the results [27,28]. The thermal imager was positioned perpendicular to the ground [8] and at a distance allowing the subject to fit into the avatar generated by the software used for analysis so that all ROIs could be satisfactorily evaluated, as shown in Figure 1. After 10 min, following the methodology of Yasuoka et al. [29], the subjects were asked to report their thermal sensation (TS) on a 9-point scale (+4, very hot; +3, hot; +2, warm; +1, slightly warm; 0, neutral; −1, slightly cool; −2, cool; −3, cold; −4, very cold) and their comfort sensation (CS) on a 7-point scale (+3, very comfortable; +2, comfortable; +1, slightly comfortable; 0, neutral; −1, slightly uncomfortable; −2, uncomfortable; −3, very uncomfortable).
The thermograms were automatically analyzed with ThermoHuman® software version 2.12 (PEMA THERMO GROUP S.L., Madrid, Spain), a validated system [30,31] that has been used in other studies with human populations [32,33,34]. The software provides mean Tsk and standard deviation values and the number of pixels, which are automatically quantified in 48 ROIs for the upper body and 36 ROIs for the lower body. These initial values were integrated, considering the average Tsk values and the corresponding number of pixels of each ROI, into seven groups (see Figure 1): whole body (TskGlobal), considering all 84 ROIs; trunk, considering 10 ROIs from the anterior view (TskTrunkANT) and 10 ROIs from the posterior view (TskTrunkPOST); arms, considering 12 ROIs of both arms from the anterior view (TskArmsANT) and 12 ROIs from the posterior view (TskArmsPOST); and legs, considering 16 ROIs of both lower limbs from the anterior view (TskLegsANT) and 16 ROIs from the posterior view (TskLegsPOST). The ROIs were integrated using the equation $T_{sk,integrated} = \left( \sum_{i=1}^{n} T_{sk,ROI_i} \times npix_{ROI_i} \right) / \sum_{i=1}^{n} npix_{ROI_i}$, where $n$ is the number of ROIs to be integrated and $npix_{ROI_i}$ is the number of pixels included in ROI $i$. The data of the head, hands, gluteus, hips, and feet were excluded from the analysis.
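The integration formula reduces to a pixel-weighted mean, as in this short R sketch with illustrative values (the study itself performed this step via ThermoHuman®).

```r
# Pixel-weighted integration of ROI temperatures into a single Tsk value;
# temperatures and pixel counts are illustrative.
tsk_roi  <- c(31.2, 30.8, 31.5)   # mean Tsk of each ROI (deg C)
npix_roi <- c(5400, 4800, 6100)   # pixels per ROI

sum(tsk_roi * npix_roi) / sum(npix_roi)   # integrated Tsk
weighted.mean(tsk_roi, w = npix_roi)      # equivalent built-in form
```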
## 2.2.3. Statistical Analysis
The Kolmogorov–Smirnov test was applied to confirm the normality of the dependent variables. As normality was confirmed, the results are presented as averages, minimum, and maximum values and their standard deviations. A Student’s t-test for independent samples was run to verify whether TS, CS, and Tsk differed between groups (obese and non-obese). Moreover, Cohen’s d was used to assess the effect size, which was interpreted following the scale proposed by Sawilowsky [35], which classifies the values of d as very small (0.01), small (0.2), medium (0.5), large (0.8), very large (1.2), and huge (2.0). The correlation between these variables was analyzed using the Pearson correlation test.
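Although the study carried out these analyses in SPSS, the same comparisons can be sketched in R with simulated data for readers who want to reproduce the logic.

```r
# Student's t-test, Cohen's d, and Pearson correlation on simulated Tsk data;
# none of these numbers are study data.
set.seed(1)
tsk_obese    <- rnorm(50, mean = 30.5, sd = 0.8)
tsk_nonobese <- rnorm(50, mean = 31.4, sd = 0.8)

t.test(tsk_nonobese, tsk_obese, var.equal = TRUE)   # independent-samples t-test

# Cohen's d from the pooled standard deviation
sp <- sqrt(((50 - 1) * var(tsk_obese) + (50 - 1) * var(tsk_nonobese)) / (100 - 2))
(mean(tsk_nonobese) - mean(tsk_obese)) / sp

bf_percent <- c(rnorm(50, 30, 4), rnorm(50, 11, 3))  # simulated %BF per group
cor.test(bf_percent, c(tsk_obese, tsk_nonobese))     # Pearson correlation
```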
Furthermore, we elaborated a normative table to establish the thermal profile of the adolescents based on the %BF for each ROI analyzed. For this, we used percentiles (P) as the reference to classify an ROI as “strongly hypo-radiant” ($P \leq 5$), “hypo-radiant” ($P \leq 25$), in a “thermal normality state” (around $P50$), “hyper-radiant” ($P \geq 75$), or “strongly hyper-radiant” ($P \geq 95$). The choice of terms for characterizing the ROI was based on other studies [36,37].
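A sketch of this percentile-based classification is given below, assuming the thermal normality band spans the P25–P75 range around the median; the reference Tsk values are simulated.

```r
# Classify an ROI's Tsk by percentile cutoffs (P5, P25, P75, P95) computed
# from a simulated reference group.
set.seed(2)
tsk_ref <- rnorm(100, mean = 31.0, sd = 0.8)
cuts <- quantile(tsk_ref, probs = c(0.05, 0.25, 0.75, 0.95))

classify_roi <- function(tsk) {
  cut(tsk, breaks = c(-Inf, cuts, Inf),
      labels = c("strongly hypo-radiant", "hypo-radiant", "thermal normality",
                 "hyper-radiant", "strongly hyper-radiant"))
}

classify_roi(c(29.3, 31.0, 32.9))
```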
The statistical analyses were carried out using SPSS software (version 22.0), with a significance level of $5\%$.
## 3. Results
Table 1 presents the data on the amount of body fat of the two participant groups ($$n = 100$$) based on their obesity classification.
No differences were observed ($p > 0.05$; $95\%$ CI = −0.122 to 0.482) in the values reported for TS and CS by obese (TS = 1.01 ± 0.40 and CS = 1.53 ± 0.58) and non-obese (TS = 0.83 ± 1.00 and CS = 1.27 ± 0.95) individuals in the thermographic collection environment.
Table 2 presents the results obtained by the thermographic evaluation of the two participant groups ($$n = 100$$) and their respective means, standard deviation, and minimum and maximum values as well as a comparison between the values observed in the participants with and without obesity. The main Tsk differences were observed for the TskGlobal (0.91 °C), TskTrunkANT (1.28 °C), and TskTrunkPOST (1.18 °C), being lower in obese individuals with “very large” effect size values.
This pattern of negative variation observed between the Tsk values of obese and non-obese adolescents was also verified in the correlation between the variables. We found a negative relationship between %BFglobal and Tskglobal (r = −0.57, $p \leq 0.001$), between the %BFTrunk and TskTrunkANT (r = −0.71, $p \leq 0.001$) and TskTrunkPOST (r = −0.65, $p \leq 0.001$), %BFArms and TskArmsANT (r = −0.29, $p \leq 0.01$) and TskArmsPOST (r = −0.36, $p \leq 0.001$), and %BFLegs and TskLegsANT (r = −0.45, $p \leq 0.001$) and TskLegsPOST (r = −0.44, $p \leq 0.001$), with emphasis on the values observed in the trunk region, as illustrated in Figure 2.
Based on the results obtained, Table 3 suggests breakpoint values to classify a person (whether obese or non-obese) according to their level of infrared radiation as “strongly hypo-radiant” ($P \leq 5$), “hypo-radiant” ($P \leq 25$), “in a thermal normality state” (around $P50$), “hyper-radiant” ($P \geq 75$), or “strongly hyper-radiant” ($P \geq 95$) for all the considered integrated ROIs.
## 4. Discussion
The main results observed in this study suggest that the Tsk of individuals considered obese is lower than that of those without obesity (Table 2). Among the results, we highlight the effect size values observed in the evaluations of the TskGLOBAL, TskTrunkANT, and TskTrunkPOST, which presented “d” values of 1.23, 1.64, and 1.57, respectively, representing probabilities of $80.8\%$, $87.6\%$, and $86.7\%$ that an obese adolescent will present lower Tsk values than a non-obese adolescent for these ROIs. Additionally, Tsk values are inversely related to %BF for all ROIs analyzed in the study, highlighting the results observed between %BFglobal and Tskglobal (r = −0.57, $p \leq 0.001$), %BFTrunk and TskTrunkANT (r = −0.71, $p \leq 0.001$), and %BFTrunk and TskTrunkPOST (r = −0.65, $p \leq 0.001$). These data make it possible to affirm that this parameter should be considered in studies evaluating Tsk by IRT, since the range of thermal normality varies according to the obesity classification of the evaluated patient. For this reason, we propose tables for the characterization of thermal normality, according to the obesity classification of male adolescents, to minimize errors in the evaluation of thermal images.
The influence of %BF on Tsk values assessed by IRT has already been verified in other studies of the adult population based on different analysis models, presenting results similar to those of the present study. Chudecka et al. [22] and Chudecka and Lubkowska [23] used the bioimpedance technique and manual marking of ROIs to assess the impact of %BF on Tsk in adult women. Chudecka et al. [22] compared 20 obese women (23.2 ± 1.57 years, 90.7 ± 5.12 kg, 167.2 ± 3.75 cm height, and 37.8 ± 2.25 %BF) with 20 non-obese women (22.4 ± 1.22 years, 60.4 ± 2.56 kg, 169.0 ± 2.68 cm height, and 25.7 ± 2.44 %BF), verifying that women with obesity presented lower values ($p \leq 0.05$) of Tsk in the anterior and posterior regions of the arms, thighs, and calves, the abdomen, and the lower portion of the ribs. In addition, they presented a negative correlation with %BF in the anterior (r = −0.77, $p = 0.001$) and posterior (r = −0.63, $p = 0.008$) regions of the thigh and the abdomen (r = −0.88, $p < 0.001$). The body fat of the abdominal region was also negatively correlated (r = −0.59, $p = 0.052$) with Tsk in the study by Chudecka and Lubkowska [23], who compared 15 women with anorexia nervosa (18–24 years, 44.9 ± 4.49 kg, 169.90 ± 6.16 cm height, and 13.30 ± 1.43 %BF) with 100 apparently healthy women (21–23 years old, 62.0 ± 4.84 kg, 168.8 ± 6.12 cm height, and 22.8 ± 3.77 %BF). In both situations, the women stayed for 20 min at a room temperature of 25.0 °C and $60\%$ relative humidity before imaging.
Neves et al. [17] and Salamunes et al. [21] used DXA to analyze the body composition of an adult population including both men and women, and the impact of %BF on the observed Tsk values also presented results equivalent to those of the present study. In the study by Neves et al. [17], which evaluated the Tsk of 47 men and 47 women aged between 18 and 28 years after 15 min at a room temperature of 23.0 ± 1 °C (no mention of humidity), the highest values of %BF were negatively correlated with the average Tsk of the anterior (r = −0.76, $p \leq 0.05$) and posterior trunk (r = −0.69, $p \leq 0.05$), anterior (r = −0.57, $p \leq 0.05$) and posterior lower limbs (r = −0.63, $p \leq 0.05$), and anterior (r = −0.42, $p \leq 0.05$) and posterior arms (r = −0.47, $p \leq 0.05$) in males and also negatively correlated with the anterior (r = −0.27, $p \leq 0.05$) and posterior trunk (r = −0.47, $p \leq 0.05$), anterior (r = −0.36, $p \leq 0.05$) and posterior lower limbs (r = −0.40, $p \leq 0.05$), and anterior (r = −0.30, $p \leq 0.05$) and posterior arms (r = −0.21, $p \leq 0.05$) in women [18]. This negative correlation in women was also reported by Salamunes et al. [21], who evaluated 123 women aged between 18 and 35 years after 15 min at a room temperature of 21.0 °C (no mention of humidity), observing this behavior in the anterior and posterior regions of the trunk (r = −0.33 and r = −0.36, $p < 0.001$, respectively), anterior and posterior arms (r = −0.40 and r = −0.43, $p < 0.001$, respectively), and anterior and posterior lower limbs (r = −0.38 and r = −0.49, $p < 0.001$, respectively).
The results of the present study, corroborated by studies that observed the same Tsk pattern and its relation to %BF, clearly demonstrate that adipose tissue influences Tsk values, probably due to its low thermal conductivity [10,11]. Thus, taking body fat into account is important when analyzing thermographic images. For this reason, we present values for the characterization of thermal normality according to the subject’s obesity classification (Table 3). We propose the median (P50) as the point of thermal normality, with cutoff points of P25 for “low” and P75 for “high” radiating ROIs, and of P5 for “very low” and P95 for “very high” radiating ROIs. This proposal is innovative and has not been made by other studies that evaluated the thermal profile in adults [18,38,39,40,41,42] or that observed differences in Tsk values as a function of body composition [17,21,22,23] or anthropometric indexes [22,23].
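As an illustration of how such a normality table can be derived, the sketch below computes the proposed percentile cutoffs from a sample of Tsk readings for a single ROI; the data array is hypothetical, not taken from the study.

```python
# Hypothetical sketch: percentile cutoffs (P5/P25/P50/P75/P95) for one ROI.
import numpy as np

tsk_values = np.array([30.1, 30.8, 31.2, 31.5, 31.9,
                       32.2, 32.6, 33.0, 33.4, 33.9])  # illustrative Tsk readings (deg C)

for label, p in [("very low (<P5)", 5), ("low (<P25)", 25), ("normal (P50)", 50),
                 ("high (>P75)", 75), ("very high (>P95)", 95)]:
    print(f"{label}: {np.percentile(tsk_values, p):.1f}")
```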
To the best of our knowledge, this is the first study to evaluate the impact of %BF on Tsk values in adolescents using DXA to estimate body composition, and it presents an analysis methodology that differs from previous studies by proposing a table of thermal normality. While previous studies used manual marking methods for ROI selection, this study used software with automatic selection that has already been used in other studies of thermographic evaluation [5,43]. This characteristic can reduce individual error and promote greater reliability of the data obtained.
Our results can contribute to the process of thermographic evaluation, providing a new perspective on previous studies that sought to characterize the population Tsk profile [38,39,40,41,42] without taking %BF into account, an omission that can lead to misevaluation of the characteristics of the evaluated individuals and may cause erroneous diagnostic action. Therefore, it is important that future studies aiming to draw a population’s thermal profile characterize it in terms of %BF or anthropometric indexes related to this variable. To allow a better understanding of thermal images, a possible suggestion is to stratify different %BF classification ranges to establish more specific normality values for differences in body fat. Despite being considered the reference method for assessing %BF, DXA is an expensive technology with limited accessibility. Thus, when researching the influence of body composition on Tsk, the body mass index (BMI) appears to be a viable alternative; however, it requires specific evaluation, since different BMI classification ranges can also influence Tsk values in adolescents, as indicated by Reis et al. [34,44]. It is also important to observe whether the subject performs resistance training, since the total amount of muscle mass can influence the BMI. We emphasize that refraining from resistance training was an inclusion criterion in this study.
The observed results demonstrate that the Tsk values considered normal for individuals with obesity differ from those for non-obese individuals. Thus, male adolescents evaluated by IRT in search of diagnostic help for pain in some muscle group should be assessed within their respective body composition range, to help the clinical staff avoid general evaluation errors. In addition, knowing this relationship can also inform evaluations in sports, where it is common for players to start the pre-season with higher body fat values, or in sports categorized by body weight, where it is normal to find different %BF and BMI patterns. Given the subject’s characteristics, understanding how and to what extent this factor can influence IRT helps in decision making and in the evaluation process, mainly when the professional assembles a thermographic mapping of the subject throughout the season. Another possibility is to check for pathological skin changes ranging from malignancies (e.g., melanomas) and autoimmune disorders (e.g., atopic dermatitis or AD) to infectious conditions such as herpes simplex, which also lead to distinctive types of changes.
Since one of the limitations of the study is that it was carried out only with Brazilian male adolescents aged 16.79 ± 0.97 years, we suggest performing similar studies with different genders and age groups; for example, women, who tend to have greater %BF and may therefore show more considerable alterations in Tsk values, or the elderly, who suffer orthopedic, metabolic, and thermoregulatory disturbances. These two population groups could benefit considerably from this strategy, allowing better evaluation of the resulting images and allowing the professional to understand whether the evaluated region is “hypo-radiant”, in a “thermal normality state”, or “hyper-radiant”, depending on the clinical context of the patient. In addition, we suggest conducting similar studies at different temperature and humidity ranges in the thermographic collection room to verify whether this can influence the results. We emphasize that the study was carried out within established standards for thermographic collection, and the participants reported feeling thermally comfortable at the temperature and humidity of the room. We also suggest that future studies control the activity of the sympathetic neurovegetative system, since it may influence the measurement in comparative cases, as in the present study, by promoting changes in blood flow. It is essential to continuously improve the application of the technique.
It is important to highlight that the procedures for obtaining thermographic images in this study followed specific guidelines related to the collecting device. However, as the evaluation of IRT in humans is in constant technological evolution, we also suggest that other evaluations investigate whether different thermographic cameras (mainly with better resolution and precision) observe the same pattern of results presented in the present study.
Understanding the factors that can influence the Tsk values obtained by IRT is crucial if thermographic images are to be used as an auxiliary tool in diagnosing alterations in an individual’s normality pattern. In the present study, the results make it clear that %BF is a variable that must be considered in thermographic image analysis, which can improve the use of IRT in clinical and sports environments and/or in the physical rehabilitation process.
## 5. Conclusions
Adolescents with a higher amount of body fat had lower Tsk values, with a negative correlation between the two variables that influences the evaluation of the thermographic image and should be carefully observed. Normality classification values for Tsk were proposed according to the evaluated classification (with or without obesity), which can be used as a reference in the evaluation of thermographic images obtained in a collection environment similar to that of the present study.
# Availability of Medical Services and Teleconsultation during COVID-19 Pandemic in the Opinion of Patients of Hematology Clinics—A Cross-Sectional Pilot Study (Silesia, Poland)
## Abstract
Summary: A new virus, SARS-CoV-2, emerged in December 2019, triggering the COVID-19 pandemic in 2020 due to the rapid spread and severity of cases worldwide. In Poland, the first case of COVID-19 was reported on 4 March 2020. The aim of the prevention efforts was primarily to stop the spread of the infection to prevent overburdening the health care system. Many illnesses were treated by telemedicine, primarily using teleconsultation. Telemedicine reduced personal contact between doctors and patients and reduced the risk of exposure to disease for patients and medical personnel. The survey aimed to gather patients’ opinions on the quality and availability of specialized medical services during the pandemic. Based on the data collected regarding patients’ opinions on services provided via telephone systems, a picture was created of patients’ opinions on teleconsultation, and attention was drawn to emerging problems. The study included a 200-person group of patients aged over 18 years with various levels of education, attending their appointments at a multispecialty outpatient clinic in Bytom. The study was conducted among patients of Specialized Hospital No. 1 in Bytom. A proprietary survey questionnaire was developed for the study, which was conducted on paper in face-to-face interaction with patients. Results: $17.5\%$ of women and $17.5\%$ of men rated the availability of services during the pandemic as good. Among those aged 60 and over, $14.5\%$ of respondents rated the availability of services during the pandemic as poor. Meanwhile, among those in the labor force, as many as $20\%$ of respondents rated the accessibility of services provided during the pandemic as good. The same answer was given by those on a pension ($15\%$). Overwhelmingly, women in the age group of 60 and over showed a reluctance toward teleconsultation. Conclusions: Patients’ attitudes toward the use of teleconsultation services during the COVID-19 pandemic varied, primarily due to attitudes toward the new situation, the age of the patient, or the need to adapt to specific solutions not always understood by the public. Telemedicine cannot completely replace inpatient services, especially among the elderly. It is necessary to refine remote visits to convince the public of this type of service. Remote visits should be refined and adapted to the needs of patients in such a way as to remove any barriers and problems arising from this type of service. This system should also be introduced as a permanent option, providing an alternative to inpatient services even after the pandemic ends.
## 1. Introduction
For a long time, coronaviruses were considered benign pathogens that cause respiratory symptoms of minor severity that resolve within a few days. The emergence of new infectious virus species has given rise to increased interest in these viruses. Before the emergence of the new SARS-CoV-2 coronavirus, a highly infectious species of SARS coronavirus had already appeared in 2002, causing a worldwide outbreak. Ten years after the SARS outbreak, new cases of the respiratory disease caused by the MERS coronavirus emerged, but this virus did not lead to a comparable outbreak. In contrast, a new SARS-CoV-2 virus emerged in December 2019, which triggered the COVID-19 pandemic in 2020 due to the rapid spread and severity of cases worldwide [1]. The Wuhan live animal and seafood market is considered the epicenter of COVID-19. In Poland, the first case was reported on 4 March 2020. The aim of the prevention effort was primarily to stem the spread of infection to prevent overburdening the healthcare system [2]. The most common symptoms at the onset of SARS-CoV-2 coronavirus infection were dry cough, fever, general weakness, and muscle aches. The course of the infection largely depends on the age of the patient, and more severe symptoms are observed more often in the elderly than in children [1]. Most symptomatic patients have a mild form of the disease ($80\%$ of patients). In contrast, $14\%$ of symptomatic patients have a severe course of the disease, i.e., accelerated breathing, significant resting dyspnea, involvement of more than $50\%$ of the lung parenchyma, and saturation below $94\%$. A minority of patients ($6\%$) have a critical course of the disease, with acute respiratory distress syndrome, multiple organ failure, and septic shock [2]. In about $20\%$ of people, the disease is asymptomatic. To a large extent, the course of the disease and its severity depend on the patient’s immune response to infection. The SARS-CoV-2 coronavirus is transmitted between people primarily by the droplet route, where close person-to-person contact is not necessary. For infection to occur, the virus must reach the mucous membranes of the throat, nose, or eyes. The minimum infectious dose of the virus has not been determined [3].
The pandemic continues to be a global threat to health care and the availability of health services. It has affected all countries, and health systems have therefore had to adapt to the new situation to ensure rapid access to medical care. However, due to reduced access to medical services during this time, the functioning of the healthcare system has been disrupted. To curb the spread of the virus, many diseases were treated through telemedicine, primarily using teleconsultation. Telemedicine has reduced personal contact between doctors and patients and reduced the risk of exposure to disease for patients and medical personnel. However, telemedicine does not fully replace the interaction that occurs face-to-face [4]. In Poland, the majority of teleconsultations within the framework of so-called telemedicine and medical advice are carried out through a telecommunications device (such as a telephone); according to estimates, this accounts for $95\%$ of all teleconsultations, while other forms, such as video chat, are marginal [2,3,4]. Nonetheless, alternative modes of communication, such as online consultations and teleconsultation, have significant benefits in emergencies. Among other things, they provide patients with real-time information and professional advice from physicians during times of inaccessibility to medical facilities [5].
The purpose of the survey was to gather patients’ opinions on the quality and availability of specialized medical services during the pandemic. Based on the data collected regarding patients’ opinions on services provided via telephone systems, a picture was created of the opinions of clinic patients regarding teleconsultation, and attention was paid to emerging problems. It was assumed that the coronavirus pandemic negatively affected the quality and availability of medical services provided by public health care providers.
## 2.1. Study Organization
The study included a 200-person group of patients aged over 18 years with various levels of education, completing their visits to specialized hematology outpatient clinics in Bytom (Silesia, Poland) (Scheme 1). To anonymize the study, only data on gender, age, and the fact of treatment were collected. All data were coded with appropriate symbols, preventing the identification of patients, in accordance with the Act of 29 August 1997 on the Protection of Personal Data (Journal of Laws of 1997, No. 133, item 883).
The primary criteria for inclusion were the patient’s written consent, expressed through participation in the survey, and that the patients be aged 18 or over. Participation in the study was anonymous and completely voluntary. The study adhered to the provisions of the Declaration of Helsinki and received a positive opinion from the Bioethics Committee of the Silesian Medical University in Katowice (ID: PCN/0022/KB/211/20).
## 2.2. Research Tool
A proprietary survey questionnaire was developed for the study, which was conducted on paper in face-to-face interaction with patients. The survey questionnaire contained 17 closed questions. The first five questions (demographics) asked about gender, age, place of residence, education, and current occupational status. The remaining 12 questions were aimed at finding out the patients’ opinions on the teleconsultations conducted and assessing their availability and quality. The questionnaire was validated by administering it twice, two weeks apart, to a group of 30 people; the first time, respondents were given the chance to express their opinion and provide comments on the content of the questionnaire, and the second time, the repeatability of responses was tested. The reliability of the questionnaire was assessed using Cronbach’s alpha coefficient and was shown to be 0.83, which in psychological research indicates good reliability.
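For reference, Cronbach’s alpha compares the sum of per-item variances with the variance of the total score, scaled by the number of items; the sketch below is an assumed implementation on hypothetical answer data, not the authors’ code.

```python
# Assumed implementation of Cronbach's alpha for a respondents x items matrix.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 30 respondents x 12 Likert-style items (values 1-5);
# random answers will not reproduce the 0.83 reported for the real questionnaire.
rng = np.random.default_rng(0)
print(f"alpha = {cronbach_alpha(rng.integers(1, 6, size=(30, 12)).astype(float)):.2f}")
```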
## 2.3. Study Sample
The study included 200 patients, most of whom were women ($58\%$). The largest number of respondents belonged to the age group of 60 years and older ($44\%$), and the smallest number belonged to the age group of 18–28 years ($9\%$). Of the respondents, $94.5\%$ were city residents and most had a secondary/vocational education ($68\%$). The surveyed patients were mostly employed ($50\%$) or retired ($49\%$) (Table 1).
## 2.4. Statistical Compilation
Statistical analysis was carried out using Statistica software (StatSoft, Poland). Multivariate tables were used in the calculations, individual groups of respondents were compared, and relationships between variables were analyzed. The Mann–Whitney U and Kruskal–Wallis tests were used in statistical inference, with p-values < 0.05 considered statistically significant. For the results of the statistical inference, the abbreviation T is adopted in the text.
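A minimal sketch of the kind of nonparametric comparisons described, written in Python/scipy purely for illustration (the authors used Statistica, and the ratings below are simulated, not the survey data):

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(1)
# Simulated availability ratings (1 = definitely bad ... 5 = definitely good)
women = rng.integers(1, 6, size=116)   # 58% of the 200 respondents
men = rng.integers(1, 6, size=84)

u_stat, p_gender = mannwhitneyu(women, men)          # two groups (e.g., gender)
age_groups = [rng.integers(1, 6, size=n) for n in (18, 40, 35, 19, 88)]
h_stat, p_age = kruskal(*age_groups)                 # >2 groups (e.g., age bands)

print(f"gender: U = {u_stat:.0f}, p = {p_gender:.3f}")
print(f"age:    H = {h_stat:.2f}, p = {p_age:.3f}")
```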
## 3. Results
In response to the question “How do you rate the availability of services provided during the COVID-19 pandemic?”, the majority of respondents rated the availability of services during the COVID-19 pandemic as good ($35\%$), and $25.5\%$ as definitely good. In contrast, $21.5\%$ of respondents marked the answer “difficult to say”, 34 people ($17\%$) rated the availability as bad, and only two people ($1\%$) as definitely bad. For the next question, i.e., “How do you rate the quality of services provided during the COVID-19 pandemic?”, $32\%$ of respondents rated the quality of services provided as good, and $27\%$ of people answered “hard to say”. Another $20\%$ of respondents rated the quality as definitely good, $15.5\%$ of respondents marked the answer “bad”, while only $5.5\%$ of people answered “definitely bad”. When asked to evaluate the quality of the services provided through ICT systems, $30.5\%$ of respondents thought that the introduction of teleconsultation and its quality were good, $27.5\%$ had no opinion on the subject, while $21.5\%$ of respondents rated the quality of services provided through ICT systems as bad. Of the respondents, $17\%$ gave a definitely good rating, and $3.5\%$ gave a definitely bad rating. Furthermore, $56\%$ of respondents indicated that the creation of teleconsultation during the COVID-19 pandemic was a good idea, while $44\%$ indicated that it was not. In response to the question “What do you like best about the advice provided through telephone or online systems?” (respondents could indicate more than one answer), most respondents indicated the convenience of visiting without leaving home ($49.5\%$), and $45.5\%$ marked safety related to the possibility of contracting a virus; however, $45\%$ indicated the answer “I don’t like this type of visit”. Additionally, $30.5\%$ of respondents indicated the lack of waiting in line, while only $17\%$ of people marked the answer that they had better contact with the doctor. In a question about possible problems arising when providing advice via ICT systems (again, it was possible to mark more than one answer), the largest number of people ($56\%$) indicated that they had not noticed any problems in this regard, $40.5\%$ of respondents had problems with connectivity, $38.5\%$ of people had problems understanding the information provided, $32.5\%$ of respondents indicated poor contact with the doctor, and $26.5\%$ of people indicated a lack of examination. To the question “Do you think it would have been a good idea to conduct visits via ICT systems without the pandemic?”, $54\%$ said yes, while $46\%$ of people indicated a “no” answer. The same number of respondents as in the previous question answered the question “Are you willing to use the advice provided by the telephone method?”: $54\%$ indicated “yes”, while $46\%$ indicated “no”. Regarding the question about the attitude of medical personnel to the advice given by the telephone method, $34.5\%$ of respondents answered “difficult to say”, $28.5\%$ of people rated the attitude of medical personnel as good, and $20.5\%$ of respondents as definitely good. In contrast, the answer “bad” was marked by $14\%$ of people, and “definitely bad” by $2.5\%$ of respondents. In response to the question “Have you used other medical facilities that also provided telehealth appointments?”, $77.5\%$ of people answered that they had used telehealth elsewhere, while $22.5\%$ of people had not used this type of service elsewhere.
The last question included only those who answered yes to the previous question, i.e., “Have you used other medical facilities where teleconsultation visits were also conducted?” and referred to 155 people. This question was about the evaluation of conducted visits to another facility via telehealth systems and $31.6\%$ of people rated the conducted visits to another facility via telehealth systems badly, $29\%$ of people did not comment, $26.5\%$ of respondents rated the visits well, $11\%$ of people marked the answer “definitely badly”, and $1.9\%$ of people marked the answer “definitely well”.
Referring to the question “How do you rate the availability of services provided during the COVID-19 pandemic?”, the responses were broken down by the number of women and men (Figure 1). Of both men and women, $17.5\%$ rated the availability of services provided during the pandemic as good; $15\%$ of women rated this availability as definitely good, while only $10.5\%$ of men gave this rating. A bad rating was given by $12.5\%$ of women and $4.5\%$ of men. The answer “definitely bad” was indicated by $1\%$ of men and $0\%$ of women. In contrast, $13\%$ of women and $8.5\%$ of men had no opinion. There was no relationship between the variable gender and the evaluation of the availability of medical services during the COVID-19 pandemic ($p > 0.05$).
For the same question, “How do you rate the availability of services provided during the COVID-19 pandemic?”, broken down by age (Figure 2): in the age group of 60 and over, $14.5\%$ of respondents rated the availability of services provided during the pandemic as bad. The answer “definitely good” was marked by $4.5\%$ of people, and “definitely bad” by $1\%$ of respondents. The same number, i.e., $12\%$ of respondents each, marked the answers “good” and “hard to say”. In the 50–59 age group, the largest number of respondents answered “good” ($7.5\%$ of people), $6\%$ of respondents marked the answer “definitely good”, and “hard to say” was indicated by $4.5\%$; no one marked the answers “bad” or “definitely bad”. In the 40–49 age group, the most frequent response was “good” ($7\%$); “definitely good” was marked by $4\%$ of respondents, $2.5\%$ of people had no opinion on the subject, and $2\%$ of respondents indicated the answer “bad”. No one marked the answer “definitely bad”. Respondents in the 29–39 age group mostly indicated the answer “definitely good” ($5.5\%$), $5\%$ of people indicated the answer “good”, $0.5\%$ indicated the answer “bad”, and $2.5\%$ had no opinion; again, no one marked the answer “definitely bad”. In the 18–28 age group, there were only two ratings, i.e., “definitely good” ($5.5\%$) and “good” ($2.5\%$). A statistically significant relationship was found between the variable age and the evaluation of the availability of services during the pandemic. Those over 60 were more likely to negatively evaluate the availability of medical services provided during the COVID-19 pandemic ($T = 11.868$; $r = 0.632$; $p = 0.001$).
Regarding the professional status of the respondents, the answers to the above question, “How would you rate the availability of services provided during the COVID-19 pandemic?” (Figure 3), were as follows: among working people, as many as $20\%$ of respondents rated the availability of services provided during the pandemic as good, $19\%$ of working respondents indicated the answer “definitely good”, $3\%$ “bad”, while $8\%$ had no opinion; no one marked the answer “definitely bad”. Those on a pension, on the other hand, mostly ($15\%$) marked the answer “good”. Of these respondents, $14\%$ marked the answer “bad”, while $13.5\%$ had no opinion on the subject. In contrast, “definitely good” was marked by $5.5\%$ of people, and “definitely bad” by $1\%$. Those who were pupils or students ($1\%$) marked one answer: “definitely good”. There was a statistically significant relationship between the variable of occupational status and the assessment of the availability of services during the pandemic. Those who were pensioners or retirees were more likely to negatively evaluate the availability of medical services provided during the COVID-19 pandemic ($T = 12.003$; $r = 0.614$; $p = 0.002$).
Another question asked “Are you willing to use telephonic advice?”, with respondents grouped by age and gender (Figure 4). Overwhelmingly, reluctance toward teleconsultation was shown by women in the age group of 60 years and older ($T = 10.099$; $r = 0.703$; $p = 0.001$). The remaining respondents’ answers were similar, so no differences were noted ($p > 0.05$). The more frequent response was “yes” among both women and men, regardless of age.
## 4. Discussion
The pandemic has changed the way healthcare services are delivered to patients around the world. To maintain precautions and physical distancing during the COVID-19 pandemic, telephone consultation was provided as an alternative to face-to-face visits, primarily in primary care (PCP) [6]. However, telemedicine also has some drawbacks: it primarily focuses on the symptoms presented by the patient, patients are often not comprehensively examined, and visual cues are often lacking. In addition, there are issues regarding the relationship between doctor and patient, as well as problems regarding the quality of the information provided [6]. Despite these drawbacks, telephone consultations were used during the pandemic because of their ability to deliver remote, essential health care to patients and to halt the spread of the virus [6].
A study by Zammit et al. found a significant improvement in patient satisfaction and an increased preference for telephone consultations [7]. Telemedicine during the pandemic made a huge impact, mainly among older patients and patients with chronic diseases. The advantages of telephone telemedicine, in addition to preventing the transmission of infections, are convenience and saving time. However, the difficulty of checking and explaining the condition to patients, the possibly incomplete assessment of their health status, and the misunderstandings that can arise from a telephone consultation between a doctor and a patient negatively affect this type of medical service [8].
The COVID-19 pandemic has proven that telemedicine is a very helpful and desirable tool in healthcare. It allows for a personalized approach on the part of healthcare professionals toward patients and the establishment of positive interactions between them. This represents a very valuable aspect from the perspective of both parties. The use of telemedicine has made it possible to access medications (so-called e-prescriptions, electronic prescriptions), make diagnoses, implement comprehensive treatment, and, in addition, carry out health education processes, including issues related to the prevention of chronic diseases. Studies related to teleconsultation, which were conducted before the outbreak of the SARS-CoV-2 virus pandemic, did not show a significant decrease in effectiveness compared with traditional visits made in a stationary manner [9,10].
A study conducted on the role and importance of telemedicine in the initial wave of the COVID-19 pandemic was the original work carried out by Fatyga et al. [11]. This study concerned elderly patients of a Silesian diabetes clinic. It involved 86 patients, aged ≥60 years, whose leading disease was type 2 diabetes. The study did not include patients with microvascular complications of diabetes, those who had suffered a stroke, those struggling with depression or other mental disorders, or those consuming excessive amounts of alcoholic beverages. The results obtained by the authors show that a significant number of patients, despite complying with all restrictions related to the sanitary-epidemiological regime (i.e., adopting preventive behaviors), declared frequent or constant feelings of fear of contracting coronavirus disease. Consequently, alternatives such as telemedicine were far more favorable to them due to the lack of direct contact with other people, thereby offsetting the risk of potential illness due to COVID-19. The conclusions of the survey demonstrate the validity of the use of telemedicine, although it is worth considering measures to improve it. In addition, it seems important to conduct further scientific research, including clinical research, focusing on telephone and electronic medicine from the point of view of patients, which will allow more accurate interpretations regarding the adequate management of medical personnel in this area, as well as strengthening behavioral health strategies among the elderly population.
Patient satisfaction with the use of telemedicine can also vary depending on the availability of both face-to-face visits and teleconsultation [8]. In a study conducted on the satisfaction and importance of teleconsultation during the coronavirus pandemic among patients with rheumatoid arthritis, $62.3\%$ said the quality of teleconsultation was not as satisfactory when compared with in-person consultations [12]. In contrast, in another study on patients’ satisfaction with the quality of teleconsultation, patients in the surveyed PCPs rated communication with the doctor and comprehensiveness of medical care the highest; the treatment used helped $47.5\%$ of patients improve their health [13]. Additionally, studies have been conducted on the use of telemedicine among asthma patients. However, the disadvantages brought to the fore regarding teleconsultation were the limited ability to perform tests and the lack of personal contact between doctor and patient [14,15]. In a subsequent study conducted among 14,000 respondents on the satisfaction of patients using teleconsultation with their PCP during the pandemic, more than $40\%$ of respondents were satisfied with the teleconsultation provided and said that the quality of services provided in this way was comparable to the advice given in an inpatient manner. In contrast, $36.3\%$ of people rated the quality of an in-person visit to a PCP higher than a teleconsultation [16]. Thanks to telemedicine, people in high-risk groups, for example those with cardiovascular disease, diabetes, or Parkinson’s disease, were able to effectively monitor their health status during the pandemic while maintaining constant contact with medical personnel [17].
The study also found that doctors and nurses showed lower satisfaction with teleconsultation than patients. Above all, medical personnel were concerned about emergencies that could occur due to the patient’s limited visualization during a telephone consultation. Telephone consultations tended to convey less information than video consultations; however, despite this, teleconsultation was preferred over video visits by both providers and patients, especially those who were less technologically advanced [8].
The nature of telemedicine may limit a provider’s ability to obtain a comprehensive physical examination, which is fundamental to a physician’s diagnostic arsenal. Of course, telemedicine does not apply to every scope, such as invasive procedures, dental procedures, or critically ill patients requiring in-person visits [8]. Lack of easy access to PCPs and specialized treatment has also been associated with widespread and higher levels of perceived anxiety among patients [18]. Inadequate access to reliable information has also fostered anti-vaccine movements [19].
In an era of efforts to curb the epidemic, it is essential to safeguard the health needs of both COVID-19-infected patients and other patients. It is also important that people who identify worrisome symptoms that may indicate the development of a condition do not give up on early diagnosis [20,21]. It should also be noted that the earlier a patient is diagnosed, the greater the chances of a faster recovery, which serves to minimize the treatment costs burdening the healthcare system. Therefore, it is recommended that health promotion and disease prevention activities be increased, along with broader health education for both citizens as a whole and patients suffering from various diseases [22]. Undoubtedly, the e-health solutions implemented so far, such as e-prescriptions, e-referrals, teleconsultation, and video consultation with a doctor, have made it possible to secure the basic needs of patients to a large extent; nevertheless, it is necessary to improve them further, as doing so will make the healthcare system more resilient to emergencies (including further epidemics) in the future [23,24]. When implementing such solutions, intensified information and education campaigns should also be carried out, especially those that emphasize the development of digital competencies among senior citizens burdened with multiple diseases. The elderly, for example, have repeatedly reported difficulties in using the Internet Patient Account. In the future, hospitals should also have procedures in place to take appropriate and proportionate action, particularly regarding restrictions on the exercise of patient rights [25,26]. Such a restriction should not be tantamount to a ban leading to the deprivation of patients’ rights, and should not prevent the realization of the rights of persons authorized by the patient or their relatives [27]. There is an urgent need to further standardize the provision of health services using solutions that allow remote communication [28]. Telemedicine and video consultations should not completely replace in-person highly specialized medical consultations; rather, they should be a form of support for the patient’s treatment process in emergencies, such as a next wave of COVID-19 or the emergence of a new pandemic. The development of telemedicine during the pandemic was undoubtedly necessary and essential, but it still needs to be refined [20,22]. During the pandemic, telemedicine was an alternative method of diagnosing, treating, monitoring, and remotely supporting patients who did not require face-to-face contact with medical personnel [27,29,30]. The study conducted by the authors of this paper indicates that patients’ attitudes toward the use of telemedicine services during the COVID-19 pandemic varied: younger people rated the quality and accessibility of teleconsultation services well, in contrast to those over 60.
## Strengths and Limitations
The study is not free of limitations. The first limitation of the conducted survey is the scope of the research sample, which includes only one specialist outpatient clinic provider from one country. However, this sample was sufficient to test and validate the research tool—a questionnaire to assess patient satisfaction with the quality of remote medical care. In addition, despite the pandemic, the survey was conducted using a face-to-face survey method, which helped reduce researcher error and the risk of “bot/fake responders”, as is the case with similar surveys conducted using the computer-assisted web interview (CAWI) method. A survey of a larger number of respondents from across the country is planned for the follow-up survey stage, which will be conducted to finalize and update the results. The second limitation is that the very evaluation of the quality of remote advice came only from the point of view of patients, who are not qualified to substantively assess the effectiveness and selection of appropriate treatment methods. The indicated research limitation provides an interesting direction for further research that could address the evaluation of the quality of the treatment by qualified medical personnel or healthcare coordinators.
## 5. Conclusions
Patients’ approach to the use of teleconsultation services during the COVID-19 pandemic varies, primarily due to attitudes toward the new situation, the age of the patient, or the need to adapt to specific solutions not always understood by the public. The availability of medical services during the COVID-19 pandemic is rated significantly lower by the elderly (over 60) and the group of pensioners/retirees. There is no gender variation in respondents’ opinions.
Telemedicine cannot completely replace inpatient services, especially among the elderly. It is necessary to refine remote visits to convince the public of this type of service. Remote visits should be refined and adapted to the needs of patients in such a way as to remove any barriers and problems arising from this type of service. This system should also be introduced as a target, providing an alternative method of inpatient services even after the pandemic ends.
## Figures, Scheme and Table
**Scheme 1:** *Location of the research conducted.* **Figure 1:** *Evaluation of medical services during the COVID-19 pandemic compared with gender.* **Figure 2:** *Evaluation of medical services during the COVID-19 pandemic compared with age.* **Figure 3:** *Evaluation of medical services during the COVID-19 pandemic compared with professional status.* **Figure 4:** *Assessment of teleconsultation during the COVID-19 pandemic compared with age and gender.* TABLE_PLACEHOLDER:Table 1
# Nutritional Content of Popular Menu Items from Online Food Delivery Applications in Bangkok, Thailand: Are They Healthy?
## Abstract
The rise in online food delivery (OFD) applications has increased access to a myriad of ready-to-eat options, which may lead to unhealthier food choices. Our objective was to assess the nutritional profile of popular menu items available through OFD applications in Bangkok, Thailand. We selected the top 40 popular menu items from three of the most commonly used OFD applications in 2021. Each menu item was collected from the top 15 restaurants in Bangkok for a total of 600 items. Nutritional contents were analysed by a professional food laboratory in Bangkok. Descriptive statistics were employed to describe the nutritional content of each menu item, including energy, fat, sodium, and sugar content. We also compared nutritional content to the World Health Organization’s recommended daily intake values. The majority of menu items were considered unhealthy, with 23 of the 25 ready-to-eat menu items containing more than the recommended sodium intake for adults. Eighty percent of all sweets contained approximately 1.5 times more sugar than the daily recommendation. Displaying nutrition facts in the OFD applications for menu items and providing consumers with filters for healthier options are required to reduce overconsumption and improve consumer food choice.
## 1. Introduction
Each year, noncommunicable diseases (NCDs) are responsible for 41 million deaths globally [1]. In Thailand, NCDs cause $75\%$ of all deaths, with cardiovascular diseases (CVDs) accounting for the highest proportion [2]. Dietary factors, including increased intake of salt, fats, and sugars, are the biggest contributor to CVD risk [3]. Transnational food and beverage corporation practices have reshaped the dietary landscape through a combination of food availability, pricing, and social and cultural desirability [4], all of which have made unhealthy foods more readily accessible. In particular, food and beverage businesses have expanded their service channels to online food delivery (OFD) applications in order to provide convenience for consumers, a strategy which has also led to increased product sales [5]. The proliferation of OFD applications has provided a broader portion of the Thai population with direct access to non-traditional and ready-to-eat foods, which have the potential to disrupt good health and well-being [6].
OFD applications are currently considered a significant predictor of food choice [7] and eating [8] among the general population. The Thai food delivery market has grown rapidly, expanding from 61,000 million baht in 2019 to 68,000 million baht during the COVID-19 pandemic in 2020 [9], and further still to 105,000 million baht in 2021 [10]. The percentage of foods ordered from OFD applications increased from $3.9\%$ to $10.7\%$ between 2019 and 2020 [11]. In 2020, $85\%$ of Thai people utilised OFD applications, with $61\%$ of that group ordering fast food, such as fried chicken, burgers, and pizzas [12]. Use of OFD applications has grown more popular than restaurant dining or takeout among Thai people due to the convenience of searching for food items and finding new restaurants through these applications [13]. Food delivery trends in Thailand are similar to those found in other countries. Evidence from Australia, New Zealand, Canada, the United States, and the Netherlands have shown that most menu items, including the most popular items on OFD applications, were unhealthy [14,15,16,17] because of their high levels of salt, sugar, and/or saturated fats [14,15,16].
Raising public awareness about dietary guidelines and package labelling are some of the most common strategies utilised to educate the public about healthy diets [18]. Thailand established government policies to tackle unhealthy diets, specifically for packaged foods. Interventions such as the Guideline Daily Amounts (GDAs) label and “Healthier Choice” nutritional logos on selected packaged food products have been used to raise awareness about healthy eating [19,20]. However, these interventions only apply to packaged foods and do not encompass OFD applications. Although a previous study has assessed the nutrition information displayed on ready-to-eat packaged foods and the nutritional quality of those food products in Thailand [21], no data currently exists on the nutritional content of foods offered through OFD applications in Thailand. This study aims to address this gap by exploring the nutritional profile of popular menu items (food and non-alcoholic beverages) available through OFD applications. The goal is to raise public awareness about the nutritional content of foods delivered through these services and inform ongoing policy development and implementation for tackling unhealthy diets and NCDs in Thailand.
## 2. Materials and Methods
We conducted a cross-sectional, exploratory study to describe the nutritional contents of the most popular food and drink items available on OFD applications in Bangkok, Thailand. We summarised the nutritional contents from the 40 most popular menu items based on energy, total fat, sodium, and total sugar, and compared them against recommended daily intake values. This study received approval from FHI 360’s Office of International Research Ethics (report number 1892564-2).
## 2.1.1. Selection of Online Food Delivery Applications
Three OFD applications (Grab, Lineman, and Robinhood) were purposively selected due to their high cumulative utilisation rate among all OFD platforms; approximately $89\%$ of Thai people used these applications when ordering their food and drinks through an OFD application [12]. Furthermore, they have consistently maintained their positions as leaders in Thailand’s OFD market [22]; Grab had the highest market share ($50\%$), followed by Lineman ($20\%$), and Robinhood ($7\%$) [23].
## 2.1.2. Identification of Popular Menu Items
Data on the most popular menu items from Grab, Lineman, and Robinhood were compiled and saved in Microsoft Excel between May and June 2021 [24,25]. Each application had its own list of most popular items, with each list differing slightly due to varying consumer preferences. All popular menu items across the three applications were selected for a total of 80 menu items. Next, 20 menu items were removed due to duplication. The remaining 60 menu items consisted of 20 items from Grab ($33\%$ of total menu items), 21 from Lineman ($35\%$), and 19 from Robinhood ($32\%$). However, given budget constraints for food nutrition analysis, the target population was reduced to 40 menu items. These items were selected based on their popularity ranking in each OFD application while maintaining the same proportion of items from the original sample size. Therefore, the target population comprised the top 13 items from Grab, the top 14 from Lineman, and the top 13 from Robinhood (Figure 1).
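The proportional reduction from 60 to 40 items can be reproduced with largest-remainder rounding; the sketch below is our illustration of that arithmetic, not the authors’ documented procedure.

```python
# Illustrative largest-remainder allocation: 60 unique items down to 40
# while preserving each application's share.
counts = {"Grab": 20, "Lineman": 21, "Robinhood": 19}
target, total = 40, sum(counts.values())

quotas = {app: n * target / total for app, n in counts.items()}
alloc = {app: int(q) for app, q in quotas.items()}   # floor of each quota
leftover = target - sum(alloc.values())
for app in sorted(quotas, key=lambda a: quotas[a] - alloc[a], reverse=True)[:leftover]:
    alloc[app] += 1

print(alloc)  # {'Grab': 13, 'Lineman': 14, 'Robinhood': 13}
```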
## 2.2. Data Collection
After identifying the most popular items across the three applications, Grab was ultimately used as the sole OFD application to order these items for data collection, as it is the most popular OFD application in Bangkok [25,26,27,28]. The final 40 menu items were categorised into three types: 25 ready-to-eat items, 5 sweets, and 10 non-alcoholic beverages. The research team ordered each menu item from each of the top 15 restaurants in Grab as nominated by consumers. For consistency, the order time was set between 8.00 a.m. and 12.00 p.m. during standard restaurant operating hours. The research team selected one standard portion of each menu item as the default. Since restaurants have varying portion sizes, the research team recorded the weight of each sample to calculate an average portion size for each item and ensure more accurate results. All items were ordered within a one-month period (4 January to 1 February 2022). Delivery drivers for OFD applications delivered menu items to a laboratory, and each menu item was tested for nutritional content the day it was received (a minimum of 500 g of sample was needed).
## 2.3. Data Analysis
Each menu item’s nutritional contents were evaluated in terms of energy, total fat, sodium, and total sugar, as overconsumption of these nutrients is one of the risk factors associated with NCDs [29,30,31]. We opted to evaluate total fat instead of saturated fat due to budget and time constraints. Nutritional analysis of the items was conducted by Central Laboratory Co., Ltd., Bangkok, Thailand [32], with nutritional contents determined using chemical analysis [33]. The research team summarised the average, minimum, and maximum values of each item’s nutritional profile. SPSS version 26 was used to analyse the variation in nutritional content among the menu items.
The outcomes of this study were compared with national and international standards, as listed in Table 1. Since recommendations for total fat and total sugar intake are calculated on a daily basis, the research team calculated the recommended intake per portion for total fat and total sugar. This entailed dividing the daily recommended intake by three, based on the assumption that one portion is equivalent to one meal and there are three meals in a day. Menu items with contents higher than the recommended criteria were categorised as “unhealthy menu items”, as sketched below.
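A minimal sketch of this classification rule, assuming the WHO-based limits cited elsewhere in the text (78 g total fat and 25 g sugar per day divided by three, 0.6 g sodium per meal, and 30% of 2100 kcal per meal); the menu entry is illustrative, with hypothetical sodium and sugar values:

```python
# Sketch of the per-portion "unhealthy" classification (assumed thresholds).
PER_MEAL_LIMITS = {
    "fat_g": 78 / 3,            # one third of the daily total-fat upper bound
    "sugar_g": 25 / 3,          # one third of the 25 g/day free-sugar guide
    "sodium_g": 0.6,            # per-meal sodium figure used in Section 3.2.3
    "energy_kcal": 2100 * 0.30, # 30% of the 2100 kcal/day requirement
}

def unhealthy_flags(item: dict) -> list:
    """Return the nutrients for which the item exceeds its per-meal limit."""
    return [k for k, limit in PER_MEAL_LIMITS.items() if item.get(k, 0) > limit]

fried_streaky_pork = {"energy_kcal": 814.9, "fat_g": 67.1,
                      "sodium_g": 1.2, "sugar_g": 2.0}  # sodium/sugar hypothetical
print(unhealthy_flags(fried_streaky_pork))  # ['fat_g', 'sodium_g', 'energy_kcal']
```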
## 3.1. Nutritional Composition per 100 Grams
Overall, 40 menu items from 15 restaurants in Bangkok were classified into three food types: 25 ready-to-eat items, 5 sweets, and 10 non-alcoholic beverages.
Table 2 shows the nutritional content per 100 g for each of the 40 items. Overall, fried streaky pork and grilled pork neck were extremely high in energy and total fat per 100 g compared to other ready-to-eat items. Fried streaky pork had the highest energy and total fat (mean energy = 440.5 kcal per 100 g; mean total fat = 36.3 g per 100 g), followed by grilled pork neck (374.5 kcal and 30.8 g, respectively). In terms of sodium content, spicy papaya salad with northeastern style fermented crab and fish was especially high in sodium (1.6 g per 100 g), nearly equivalent to the daily recommended maximum sodium threshold of 2 g. Grilled pork balls (0.8 g per 100 g) and grilled pork (0.8 g per 100 g) were also high in sodium. In terms of sugar, pandan and coconut chiffon cake ranked highest in total sugar (23 g per 100 g), followed by iced honey lemon tea (19.7 g per 100 g) and iced cocoa (16.2 g per 100 g).
## 3.2.1. Energy
Figure 2 illustrates the energy content of all 40 menu items. Fried streaky pork contained the highest average energy content per portion (814.9 kcal), followed by grilled pork (811.5 kcal), and rice with stir-fried minced pork, chili, and basil (734.4 kcal). Among sweets, pandan and coconut chiffon cake was the highest in average energy (1098.8 kcal), followed by egg tart (678.5 kcal). For non-alcoholic beverages, bubble milk tea was the highest in average energy (417.9 kcal), followed by iced green tea Frappuccino (382.2 kcal) and iced coffee (336.5 kcal).
The WHO’s recommended average daily energy requirement for adults is 2100 kcal per person per day, with a single meal ideally contributing no more than $30\%$ of the recommended daily total energy [34]. Fried streaky pork contained an average of 814.9 kcal per portion (around $39\%$ of total daily intake), and the pandan and coconut chiffon cake (307 g) provided an average of 1099 kcal per portion (around $50\%$ of total daily intake). Bubble milk tea had an average of 418 kcal per portion (around $20\%$ of total daily intake).
Additionally, among all menu items, 7 were categorised as “unhealthy” in terms of energy content (five ready-to-eat foods and two sweets). The most “unhealthy” item was fried streaky pork, followed by grilled pork; grilled pork neck; grilled pork balls; spicy papaya salad with dried shrimp and roasted peanuts; spicy papaya salad with northeastern style fermented crab and fermented fish; pandan and coconut chiffon cake; and egg tart. Notably, none of the non-alcoholic beverages fell into the “unhealthy” category.
## 3.2.2. Total Fat
Six ready-to-eat items and three sweets contained higher fat than the WHO’s recommendation, but none of the non-alcoholic beverages were above the recommended threshold. The average total fat content per portion was highest for fried streaky pork (67.1 g), followed by grilled pork (55.6 g) and grilled pork neck (46.8 g). For sweets, the average total fat content per portion was greatest for pandan and coconut chiffon cake (65.1 g), followed by egg tart (45 g) and coconut milk ice cream (30.9 g) (Figure 3). Although these menu items consist of just one meal, they already contain nearly all of the WHO’s recommended total daily fat intake [34,35].
## 3.2.3. Total Sodium
Overall, 8 out of 25 ready-to-eat items were very high in sodium (exceeding the daily sodium intake threshold of 2 g), and 23 of the 25 ready-to-eat items were “unhealthy”, containing more than the recommended sodium intake for adults of 0.6 g per meal (Figure 4). For reference, the WHO suggests that a person should consume less than 5 g of salt (approximately 2 g of sodium) per day [36].
Mean sodium levels were much higher when reported per portion rather than per 100 g. The average total sodium content per portion was greatest for spicy papaya salad with fermented northeastern style crab and fish, Chinese pork bun, and iced coffee. One portion of spicy papaya salad with fermented northeastern style crab and fish (313 g) contained 5 g of sodium, and the average portion for Chinese pork bun contained 0.8 g of sodium. High sodium was not only found in ready-to-eat items but also in non-alcoholic beverages. Iced coffee was found to have the highest amount of sodium per portion among non-alcoholic beverages at 0.3 g.
## 3.2.4. Total Sugar
Eight non-alcoholic beverages were considered unhealthy (more than 25 g of sugar per portion) (Figure 5). The WHO recommends that adults and children reduce their daily intake of free sugars from less than $10\%$ of their total energy intake to less than $5\%$, or roughly 25 g (6 teaspoons) per day [35,37,38,40]. All non-alcoholic beverages, except for soy milk and iced Americano, contained an average of 33.9 g of sugar per portion, and all sweets except for egg tart and deep-fried Chinese dough contained an average of 31.5 g of sugar per portion; this is almost 1.5 times higher than the daily recommendation.
Notably, the average sugar content per menu item may not be indicative of whether a certain item is “healthy” or “unhealthy” in the Thai context when compared to the WHO’s recommendation. Although the average sugar content of an item may show that it is “healthy”, this is also based on the average portion size and the standardisation of ingredients. Thus, if a certain item’s portion size happens to be much larger than the average, or if a certain restaurant’s recipe uses more sugar than normal, it is possible that the item may be categorised as “unhealthy”. For example, the average sugar content of rice with salmon was 7.7 g, which is considered “healthy”. However, the sugar content range for this menu item was 0–21.6 g, with the maximum value close to the WHO recommended daily sugar intake (25 g).
## 4. Discussion
Most of the menu items were considered unhealthy, with higher levels of energy, total fat, sodium, and total sugar compared to the recommended daily intake. The findings of this study correspond to similar studies in China and Canada, where the nutritional quality of OFD foods was generally low [41] and did not meet healthy eating recommendations [16]. The nutritional information generated from analysing the 40 menu items can serve as a launching point for both practical actions in the form of regulating information provided through OFD applications and raising consumer awareness about nutritional contents. The large variations in total fat, sodium, and sugar content observed when comparing menu items per portion and per 100 g indicate that opportunities exist for improvement. This can be achieved by standardising portion size or showing nutritional facts for menu items through the OFD applications, particularly for sodium, sugar, and fat. These approaches may reduce the overconsumption of unfavourable nutrients and are strategies advocated for addressing NCDs [42].
## 4.1. Energy Content
When analysing these menu items, a portion of fried streaky pork delivered via OFD applications contained $39\%$ of the WHO’s recommended daily energy intake for adults, and $37\%$ and $46\%$ of the recommended daily energy intake for Thai men and women aged 19 to 50 years, respectively, based on the Department of Health (DOH), Ministry of Public Health (MoPH). This does not account for any additional accompaniments, such as rice (one ladle), which can add approximately 80 extra calories [38]. Furthermore, the DOH recommends aiming for approximately 400–600 calories for a main meal [38]. Many menu items, including drinks, contain nutrient levels higher than the DOH recommendations for daily caloric intake. Restaurants should consider improving the overall nutritional profile of these items by reformulating the recipe or cooking method, or by reducing portion size.
## 4.2. Total Fat Content
Six ready-to-eat items and three sweets had higher fat content than the WHO’s and DOH’s recommended total fat intake, which is 20–$35\%$ (44–78 g) of total energy intake for Thai adults [35]. This is particularly problematic since desserts are likely to be consumed alongside a main meal, meaning consumers are consuming more fat than recommended in a single meal. Restaurants should consider substituting ingredients with lower fat alternatives. For example, since one of the main ingredients in pandan and coconut chiffon cake is high fat oil, bakery shops should consider substituting this with reduced fat oils.
## 4.3. Sodium Content
Sodium content was particularly high among the menu items assessed, which did not include any condiments that are often added to meals. WHO evidence revealed that Thais consume an average of 10.8 g of salt per day or 4.2 g of sodium in their current lifestyle, which was more than double the recommended daily amount of salt in 2015 [43]. A cross-sectional population-based survey conducted in Thailand in 2021 revealed that average sodium consumption among Thai adults was 3.6 g per day [44]. Our study supports this finding since many popular menu items in our analysis were also found to be high in sodium, and recipes with alternatives to sodium, such as low sodium condiments, were not popular due to higher prices. Thailand has set an ambitious goal of reducing the population intake of salt/sodium by $30\%$ [45]; this is in line with the WHO’s global voluntary targets for a $30\%$ relative reduction in mean population intake of salt/sodium by 2025 (relative to 2010 levels) [46]. Based on the WHO’s and DOH’s recommendations for daily and per meal sodium intake, restaurants should reduce sodium content by reformulating their recipes and providing nutritional information through OFD applications to enhance consumer awareness and transparency.
## 4.4. Sugar Content
All non-alcoholic beverages (except for soy milk and iced Americano) and all sweets (except for egg tart and deep-fried Chinese dough) contained average sugar content higher than the daily recommendation. A new WHO guideline recommends that ‘free’ sugars make up no more than $10\%$ of daily kilojoule intake [37]. Notably, total sugar refers to the total amount of sugar from all sources (free sugars plus those from milk and those present in the structure of foods such as fruit and vegetables). Our nutritional analysis does not distinguish between naturally occurring sugars and free sugars. However, it is likely that the sugar content of the various papaya salads, pandan and coconut chiffon cakes, and iced honey lemon teas exceeded the WHO’s and DOH’s daily sugar recommendation for adults [35,37,38,40].
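As a worked illustration of the $10\%$ guideline (assuming a daily intake of 8700 kJ, the reference value used on many nutrition labels, and an energy density of about 17 kJ per gram of sugar; both values are assumptions for this example):

$$\frac{0.10 \times 8700~\text{kJ}}{17~\text{kJ/g}} \approx 51~\text{g of free sugars per day}$$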
## 4.5. Policy Implications
Revising the Thai national policy could be another method for tackling sugar consumption. An updated excise tax has been applied to sugary drinks since 16 September 2017 [47]. The levy on sugary drinks is capped at $20\%$, with beverages containing more sugar carrying a larger tax burden than less sweet beverages [48]. However, this policy targets sweetened beverages sold as packaged foods at retailers or supermarkets. Almost all of the sweets and non-alcoholic beverages in our study fall outside this policy, since foods sold at restaurants are not categorised as packaged food and are not required to be labelled. Despite their lack of inclusion in the policy, there is scope for restaurants to revise their recipes to reduce sugar content while concurrently displaying nutritional facts on OFD applications to help consumers make informed food choices that contribute to a healthy diet.
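The tiered structure of such a levy can be sketched in a few lines of code. The tier boundaries and rates below are hypothetical placeholders, not the actual Thai excise schedule; only the $20\%$ cap and the principle that sweeter drinks carry a larger burden come from the policy described above.

```python
# Illustrative sketch of a tiered sugar levy. All thresholds and rates are
# hypothetical; only the 20% cap and the "more sugar, more tax" principle
# reflect the policy described in the text.

def sugary_drink_levy(price: float, sugar_g_per_100ml: float) -> float:
    """Return the levy for a drink, increasing with sugar content and
    capped at 20% of the retail price."""
    tiers = [  # (upper sugar bound in g/100 ml, levy rate as fraction of price)
        (6.0, 0.00),   # below 6 g/100 ml: no levy (hypothetical tier)
        (8.0, 0.05),
        (10.0, 0.10),
        (14.0, 0.15),
    ]
    rate = 0.20  # drinks above the top tier pay the maximum 20%
    for upper_bound, tier_rate in tiers:
        if sugar_g_per_100ml < upper_bound:
            rate = tier_rate
            break
    return price * rate

# Example: a 25-baht drink with 12 g sugar/100 ml pays a 15% levy (3.75 baht).
print(sugary_drink_levy(25.0, 12.0))
```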
In addition to the policy strategies suggested above to reduce consumer intake of foods high in fats, sodium, and sugars, relevant public entities could collaborate or partner with OFD application developers to provide healthier food options. This can be accomplished in several ways. First, a voluntary upper limit could be set for sugar, fat, and sodium in the OFD applications or in restaurant menu details to indicate that the item is a “healthier” option. If a menu item is under the threshold, it can be indicated as “healthier”. Second, OFD application developers could design settings to allow consumers to filter options when they order. For example, they can choose to filter foods or restaurants by “less salt”, “less sweet”, and “less fat”. Third, public entities and developers could work together with restaurants to set and implement standardised portions for menu items available through the application. Finally, a logo can be designed for use on OFD applications to inform consumers that the food is healthy.
Restaurants should know whether the foods they sell are unhealthy. The Bureau of Nutrition, MoPH, has produced the Thai Nutri Survey Program (TNS) and the relevant manuals to address this issue. Social marketing should therefore be used to promote this program among restaurants and the public, raising awareness and providing the tools to analyse and monitor the nutritional content of menu items. Consequently, restaurants will know how healthy their menu items are. This nutritional content should also be shown on the application, enabling consumers to make informed food choices when ordering.
## 4.6. Strength and Limitations
To the best of our knowledge, this study is the first to investigate the nutritional content of popular menu items from OFD applications in Thailand. It analysed nutritional content with assistance from a professional laboratory, thus providing objective data and helping to reduce the knowledge gap related to nutritional information for some of Thailand’s most popular foods and drinks. However, the study also has several limitations. First, this study only considered 15 restaurants in Bangkok that were ranked based on popularity by Grab and did not consider popular menu items from other applications or locations. In addition, popular menu items obtained through the OFD applications are only valid within the study period and may only be relevant to Bangkok. Therefore, these findings may not apply to popular menu items outside the study duration if recipes are changed, or to other parts of the country. However, given the wide range of items covered, adding more restaurants would be unlikely to substantially change the results. Second, budget constraints prohibited the addition of condiments into the analysis. Future studies should include condiments to provide more accurate results that represent a complete meal and improve understanding of typical consumption patterns. In addition, no data exist comparing home-cooked foods with OFD foods; it would be helpful to investigate whether the same menu items made at home are healthier.
Finally, the WHO’s most recent guidelines for daily energy, fat, salt, and sugar intake were used to evaluate salt and sugar levels in popular menu items. Although these guidelines are based on scientific evidence [46], limitations exist. The guidelines do not differentiate by gender or age, so average values may not be fully applicable in the Thai context due to differences in physiology between Thais and people of other races/ethnicities. Moreover, most menu items in this study did not meet international standards for energy and fat per meal. Further exploration is required to obtain a more accurate standard to assess the healthiness of foods.
## 5. Conclusions
OFD platforms are becoming popular, with an increasing number of orders for ready-to-eat foods, sweets, and non-alcoholic beverages. However, we found that most single items purchased through OFD applications in Bangkok contained levels of energy, total fat, sodium, and total sugars that were close to or exceeded recommended daily intakes. This creates additional challenges for public health nutrition policymakers, though OFD platforms may also provide an opportunity to improve public health nutrition and diet-related health outcomes using certain policy levers. It will be important for relevant entities under the MoPH—NCD Division and DOH—to collaborate with OFD application developers to use their influence and promote healthy food consumption. Such a public-private partnership may help increase the availability of healthy choices while also nudging consumers towards these options. Going forward, the nutritional contents of popular menu items should be randomly assessed. Condiments and other menu items from OFD applications not included in this assessment, as well as items from restaurants in other Thai provinces, should be included in future studies to increase the comprehensiveness of nutritional content measurement and analyses in Thailand.
# Bidirectional Comorbid Associations between Back Pain and Major Depression in US Adults
## Abstract
Low back pain and depression have been globally recognized as key public health problems and they are considered co-morbid conditions. This study explores both cross-sectional and longitudinal comorbid associations between back pain and major depression in the adult population in the United States. We used data from the Midlife in the United States survey (MIDUS), linking MIDUS II and III with a sample size of 2358. Logistic regression and Poisson regression models were used. The cross-sectional analysis showed significant associations between back pain and major depression. The longitudinal analysis indicated that back pain at baseline was prospectively associated with major depression at follow-up (PR 1.96, CI: 1.41, 2.74), controlling for health behavioral and demographic variables. Major depression at baseline was also prospectively associated with back pain at follow-up (PR 1.48, CI: 1.04, 2.13), controlling for a set of related confounders. These findings of a bidirectional comorbid association fill a gap in the current understanding of these comorbid conditions and could have clinical implications for the management and prevention of both depression and low back pain.
## 1. Introduction
Low back pain and depression have been recognized as major public health problems in the world. Low back pain has been globally ranked the highest cause of disability and years lived with disability among various diseases [1]. Depression has similarly been documented as a leading cause of global health-related burden and disability [2].
Low back pain and depression frequently occur together and are seen as co-morbid conditions [3]. Substantial research has been conducted on the comorbid association between low back pain and depression in the past few decades, but the published research has been marked by inconsistency and controversy [4,5,6,7]. The primary question arising from this literature is whether depression is the cause of low back pain or the result of the chronicity of back pain. There have been three hypotheses related to this question: (a) depression increases the risk of low back pain, (b) low back pain increases the risk of depression, and (c) the association of chronic low back pain and depression is bidirectional [8,9]. Compared with the first two hypotheses, there has been less research investigating the possible bidirectional association hypothesis [6,7,9].
Much of the controversy in the literature can be attributed to the type of study population that has been commonly used, namely patients [9]. Patients with low back pain usually have a higher prevalence of depression, and patients with depression have a higher likelihood of low back pain symptoms, than the general population [10].
There have been limited population-based studies and even fewer using U.S. population databases. An initial examination of the cross-sectional association between chronic musculoskeletal pain and depression in the U.S. population indicated a significantly increased risk of depression in participants with chronic musculoskeletal pain compared to those without [11]. However, the data for that study came from the first National Health and Nutrition Examination Survey (NHANES I, 1971–1974) and are over half a century old [11]. Other cross-sectional studies conducted in different parts of the world have also indicated a linkage between depression and low back pain [12,13,14].
Longitudinal associations between depression and low back pain have been understudied, and the limited evidence has not been consistent [4]. In a randomized controlled clinical trial with 18 months of follow-up, Hurwitz and colleagues found bi-directional associations between low back pain and psychological distress using both cross-sectional and longitudinal assessments [6]. Another longitudinal study conducted in Canada focused on a population-based, random sample of adults followed up at 6 and 12 months. This study indicated an independent and robust relationship between depressive symptoms and onset of an episode of spinal pain [4]. However, a third study using adult twins conducted in Spain indicated no significant association between chronic low back pain and the future development of depression [14].
The goal of this study is to explore the cross-sectional and longitudinal comorbid associations between major depression and back pain in a national sample of adults in the U.S. using data from the Midlife in the United States Survey (MIDUS) with a population-based prospective design. The analysis focuses on the comorbid association between depression and back pain, controlling for demographic and socioeconomic factors, and health behavioral factors.
## 2. Materials and Methods
The data used for this study came from the MIDUS, which is aimed at investigating behavioral, psychological, and social factors for health and wellbeing in a national sample of Americans. The MIDUS was developed with a prospective population design. The MIDUS I was conducted in 1995–1996, MIDUS II was conducted in 2004–2006, and MIDUS III was conducted in 2013–2014 [15]. This study used the longitudinal data of MIDUS II and III, with a 9-year follow-up period.
## 2.1. Study Population
The MIDUS collects data through telephone interviews and a self-administered questionnaire (SAQ). In total, 4963 participants who were 30 years of age and above in the MIDUS II were included in the baseline (T-1) for the study, as indicated in Figure 1. However, there were 922 participants who were not part of the SAQ and did not provide data for back pain. An additional 41 participants had no answer to the question on back pain, and there were 557 participants with missing data for covariates. Thus, those without data on back pain or covariates were excluded, leaving 3443 participants for T-1, which we used as the sample for the cross-sectional analysis. After 9 years, 882 participants who were lost to follow-up and 203 participants who did not have data on back pain at MIDUS III (T-2) (161 participants were not part of the SAQ, and 42 participants had missing data for back pain) were excluded. The final sample size used for the current analysis was 2358 (Figure 1). For the longitudinal analysis on the association between back pain at T-1 and major depression at T-2, we included 2109 participants who were free of major depression at T-1 (249 participants with major depression at T-1 were excluded). For the longitudinal analysis on the association between major depression at T-1 and back pain at T-2, we included 1790 participants who were free of back pain at T-1 (568 participants with back pain at T-1 were excluded).
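The exclusions above account exactly for the analytic sample sizes:

$$4963 - 922 - 41 - 557 = 3443, \qquad 3443 - 882 - 203 = 2358, \qquad 2358 - 249 = 2109, \qquad 2358 - 568 = 1790$$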
## 2.2. Measurements
For detailed information on the key variables for the analysis, please see Appendix A.
## 2.2.1. Back Pain
Back pain was assessed by an independent question that focused on frequency of backaches. A respondent’s answer of experiencing backaches “almost every day” or “several times a week” in the past 30 days was defined as back pain.
## 2.2.2. Major Depression
Major depression was assessed through a pre-coded dichotomous variable based on the Composite International Diagnostic Interview Short Form (CIDI-SF) [16]. Two domains were included in the assessment: Depressed Affect and Anhedonia. For more information, please see Appendix A.
## 2.2.3. Health Behaviors
Assessments of health behavioral factors included four variables: leisure-time physical activity, tobacco use, alcohol consumption, and obesity. Leisure-time physical activity was coded as a variable with three categories: active (vigorous or moderate physical activity several times a week), insufficiently active (vigorous or moderate physical activity once a week to less than a month), and inactive (no moderate or vigorous physical activity at all). For more information, please see Appendix A. Current tobacco use was coded as a dichotomized variable based on four questions: “Yes” was based on the question, “Do you now smoke cigarettes regularly?”, while “No” was based on the questions, “Age had first cigarette?”, “Ever smoked cigarettes regularly?”, and “Do you now smoke cigarettes regularly?” Alcohol consumption was coded as a nominal variable with three categories: non-drinkers, light to moderate drinkers, and heavy drinkers. Obesity was assessed based on self-reported weight and height and was classified as a body mass index (BMI) > 30.
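A minimal sketch of this coding scheme is given below. The response strings and function names are hypothetical (actual MIDUS items and codes differ), but the BMI formula and the BMI > 30 obesity cut-off follow the definition above.

```python
# Sketch of the behavioral coding described above. Response strings and
# function names are hypothetical; actual MIDUS variable names differ.

def code_activity(frequency: str) -> str:
    """Leisure-time physical activity: active / insufficiently active / inactive."""
    if frequency == "several times a week":
        return "active"
    if frequency in ("once a week", "a few times a month", "less than once a month"):
        return "insufficiently active"
    return "inactive"

def is_obese(weight_kg: float, height_m: float) -> bool:
    """Obesity classified as BMI > 30, from self-reported weight and height."""
    bmi = weight_kg / height_m ** 2   # BMI = weight (kg) / height (m)^2
    return bmi > 30

print(code_activity("several times a week"))  # -> "active"
print(is_obese(95.0, 1.70))                   # BMI ~32.9 -> True
```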
## 2.2.4. Demographic and Socioeconomic Characteristics
Demographic and socioeconomic factors included in the analysis were: sex, age, race/ethnicity, education, and personal earning. Race/ethnicity was coded as two groups: Non-Hispanic White and others. Age was coded into three age groups by years: 30–49; 50–59; and 60–76 and over. Education was assessed through the question: “What is the highest grade of school or year of college you completed?” The response was coded into three categories: high school or less than high school; some college; and college and above. Personal earning was based on the original income variable with a sum of responses to the questions on personal earning income of the respondent, pension income of the respondent, and social security income. It was coded into three categories: <$19,999; $20,000–$59,999; $60,000–$200,000 and above.
## 2.3. Statistical Analysis
The main goal of the analysis was to investigate whether major depression is prospectively associated with back pain and whether back pain is linked to subsequent major depression. All data were analyzed using Stata 12.1 [17]. Analyses were performed on individuals with complete data.
To describe the characteristics of the study sample at T-1, we first conducted a descriptive analysis of the prevalence of major depression and back pain, the demographic characteristics of the study participants (age, sex, and race/ethnicity), socioeconomic status (education and personal earning), and behavioral factors (leisure-time physical activity, tobacco use, alcohol consumption, and obesity). In addition, we conducted bi-variate analyses between the two key health outcome variables at T-1 and the characteristics of the participants, using Pearson’s chi-squared test.
We then conducted multivariable cross-sectional and longitudinal analyses to explore comorbid associations between major depression and back pain. We constructed models based on several studies that examined the association between back pain and depression, using the demographic characteristics (age, sex, education, and earning) and health behavioral factors (leisure-time physical activity, tobacco use, alcohol consumption, and obesity) as confounders [4,5,6]. Race/ethnicity was not controlled in the four models of cross-sectional and longitudinal associations due to the disproportionately high percentage of Non-Hispanic White participants in the data.
Model 1 focused on cross-sectional comorbid association between major depression at T-1 and back pain at T-1 with multivariable logistic models, controlling for demographic characteristics (age and sex), socioeconomic status (education and personal earning), and behavioral factors (leisure-time physical activity, tobacco use, alcohol consumption, and obesity). Model 2 focused on the cross-sectional association between back pain at T-1 and major depression at T-1, controlling for demographic characteristics, socioeconomic status, and behavioral factors.
Models 3 and 4 were constructed to focus on longitudinal comorbid associations of major depression and back pain with Poisson regression models. Model 3 focused on the longitudinal association between back pain at T-1 and major depression at T-2, following a group of participants with no major depression at T-1 and controlling for demographic characteristics, socioeconomic status, and behavioral factors. Model 4 focused on major depression at T-1 and back pain at T-2, following a group of participants without back pain at T-1, controlling for demographic characteristics, socioeconomic status, and behavioral factors.
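A sketch of Models 1 and 3 is shown below. The original analysis was run in Stata 12.1; this Python/statsmodels translation is illustrative only, with hypothetical column names and file path. Pairing Poisson regression with robust (sandwich) standard errors is a common way to estimate prevalence ratios for binary outcomes, though the text does not specify the variance estimator used.

```python
# Illustrative re-implementation of Models 1 and 3. Column names and the
# data file path are hypothetical placeholders for the linked MIDUS II/III data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("midus_ii_iii.csv")  # hypothetical path to the linked data

covariates = ("age_group + sex + education + earning + activity"
              " + smoker + alcohol + obese")

# Model 1 (cross-sectional): major depression at T-1 predicting back pain at T-1.
m1 = smf.logit(f"back_pain_t1 ~ depression_t1 + {covariates}", data=df).fit()
print(np.exp(m1.params["depression_t1"]))  # adjusted odds ratio (aOR)

# Model 3 (longitudinal): back pain at T-1 predicting major depression at T-2,
# restricted to participants free of major depression at baseline.
cohort = df[df["depression_t1"] == 0]
m3 = smf.glm(f"depression_t2 ~ back_pain_t1 + {covariates}",
             data=cohort,
             family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(m3.params["back_pain_t1"]))   # prevalence ratio (PR)
```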
## 3.1. Baseline Characteristics
Table 1 shows the prevalence of major depression and back pain, characteristics of the study participants, and bi-variate associations at the baseline (T-1). The prevalence of major depression for those with back pain was $17.4\%$, which was higher than that of the general study population ($10.5\%$) at T-1. The prevalence of back pain within those with major depression ($37.4\%$) was also higher than that of the general study population ($22.5\%$) at T-1.
For demographic characteristics, $55\%$ were female, $36\%$ were aged 60 to 70 and over, and over $90\%$ of the participants were Non-Hispanic White. The bi-variate associations between major depression and the main demographic factors were significant, with the exception of race and ethnicity. Age was inversely related to major depression. There was a higher proportion of female participants with major depression ($24.1\%$) and a higher proportion of female participants with back pain ($14.1\%$). For socioeconomic status, education and earning distributions were both inversely related to both major depression and back pain, although the prevalence levels varied. For health behavioral factors, the level of leisure-time physical activity was inversely related to complaints of back pain: the lower the level of leisure-time physical activity, the greater the likelihood of back pain. Current smoking was also significantly related to both back pain and major depression.
## 3.2. Cross-Sectional Multivariable Associations
The cross-sectional analysis of multivariable associations is shown in Table 2. Model 1 in Table 2 indicates that major depression at T-1 was significantly associated with back pain at T-1 (aOR 2.13, CI: 1.68, 2.71), controlling for demographic and health behavioral factors. Model 2 in Table 2 shows that back pain at T-1 was significantly associated with major depression at T-1 (aOR 2.11, CI: 1.66, 2.69), controlling for demographic and health behavioral factors. Bidirectional cross-sectional associations between major depression and low back pain were seen.
## 3.3. Longitudinal Associations
In exploring the bidirectional associations between back pain and major depression, Model 3 in Table 3 shows that back pain at T-1 was significantly associated with major depression at T-2 (PR 1.96, CI: 1.41, 2.74), controlling for demographic variables and health behavioral factors. Female adults had a prospectively increased risk of major depression (PR 1.87, CI: 1.32, 2.65), and adults who currently smoked at T-1 were more likely to have major depression at T-2 (PR 1.75, CI: 1.17, 2.62). Furthermore, light to moderate drinkers of alcohol at T-1 may have had a lower risk of major depression at T-2.
Model 4 in Table 4 shows that major depression at T-1 was associated with back pain at T-2 (PR 1.48, CI: 1.04, 2.12), controlling for a set of confounders. Older adults aged 60 to 75 and older at T-1 were more likely to have back pain at T-2 (PR 1.39, CI: 1.05, 1.84). Furthermore, compared with heavy drinkers, light to moderate drinkers of alcohol at T-1 may have had a lower risk of back pain (PR 0.72, CI: 0.53, 0.99).
## 4. Discussion
This study is the first population-based longitudinal study on the bi-directional comorbid association between major depression and back pain in adults in the United States. The findings of this study show that major depression is likely to be prospectively associated with back pain, and that back pain is linked to subsequent major depression. This study provides evidence to support the bidirectional association between these two disabling disorders and is consistent with the findings in the prior study by Hurwitz et al. [7]. This study also readdresses several controversial research issues in terms of hypotheses, study population, measurement, and data analysis in the understanding of the bi-directional associations [3,4,5,6,7,8].
Using data from a national sample of the U.S. population, this study shows the prevalence of major depression and back pain in the U.S. general population. This study shows an increased prevalence of major depression in people reporting back pain ($17.4\%$) when compared to study subjects without back pain ($10.5\%$). At the same time, the prevalence of back pain in study subjects with major depression ($37.4\%$) was higher than in those without major depression ($8.5\%$). This finding is consistent with studies conducted in South Korea and Qatar. In the South Korean study of patients with depressive symptoms, $20.3\%$ reported chronic low back pain, much higher than the $4.5\%$ prevalence of low back pain in the general population [13]. The study in Qatar [10] reported a similar pattern, with $13.7\%$ of people with depression complaining of low back pain compared to $8.5\%$ of the general population. In that study, male subjects with chronic low back pain also reported a higher prevalence of depression than the general population ($32\%$ vs. $16\%$) [18].
One strength of the current study is the instrument used for assessing major depression, the Composite International Diagnostic Interview Short Form (CIDI-SF) [19]. This instrument is considered to have satisfactory reliability and internal consistency [17]. Another strength of the current study was the longitudinal design, which made it possible to explore the impact of major depression as a precursor of back pain compared with a cohort of participants free of major depression, and vice versa.
A main limitation of this study may be attributed to the general goal of the MIDUS, which was not designed for assessing the association between major depression and back pain. The second limitation relates to the definition of back pain, which was based on the MIDUS question on “backache”. Although not explicitly defined as conventional “low back pain”, this item may capture any spinal pain inferior to the neck, including thoracic pain, even though it was operationalized here as low back pain. The third limitation is the long follow-up period of 9 years, which was longer than several other published studies using 6- to 12-month follow-up periods [5,7]. With a 9-year follow-up period, changes in low back pain and depression occurring between the two waves may be missed. This study sample also had a disproportionately high proportion of Non-Hispanic White participants, which may limit its generalizability.
Understanding the mechanism of bidirectional association between chronic pain and depression may come from insights provided by recent brain imaging research. Chronic pain and depression appear to have a common neuroplasticity mechanism, which could explain their bidirectional relationship [4,20,21,22]. On the other hand, the bidirectional associations could be explained by shared environmental, clinical, psychosocial, or other factors for back pain and depression [7]. Job strain, a workplace psychosocial factor, has also been linked to both low back pain and major depression [23,24]. However, we did not control for the possible environmental, clinical, or psychosocial factors as confounders. These confounders may be common to both depression and back pain, and they might explain the findings. However, exploring these confounders is beyond the scope of our current study.
This study indicates back pain and depression are not isolated conditions. Understanding the comorbid and bidirectional associations between chronic pain and depression is important, as it may have implications for the management of patients with both depression and low back pain [25,26].
Since both these disorders cause high levels of disability and may be causally related in a bidirectional manner, it would perhaps be of value to assess and manage patients presenting with depression by enquiring about back pain (and vice versa), and addressing those complaints at the same time, rather than managing them as isolated health concerns. Future large-scale population-based longitudinal studies are needed to explore factors related to the onset, progression, and recurrence of low back pain and major depression, as well as psychosocial, behavioral, and other factors that may impact bidirectional comorbid associations.
## 5. Conclusions
This study indicated that low back pain and depression are not isolated conditions and that they have a prospective bidirectional association. This study fills a gap in the field and may have implications for the management and prevention of disability associated with both depression and low back pain. Future large-scale population-based longitudinal studies are needed to explore factors related to temporal precedence, onset, progression, and recurrence of low back pain and major depression, as well as psychosocial, behavioral, and other factors that may impact bi-directional associations.
# Effects of PM2.5 Exposure on the ACE/ACE2 Pathway: Possible Implication in COVID-19 Pandemic
## Abstract
Particulate matter (PM) is a harmful component of urban air pollution, and PM2.5, in particular, can settle in the deep airways. The renin-angiotensin system (RAS) plays a crucial role in the pathogenesis of pollution-induced inflammatory diseases: the ACE/AngII/AT1 axis activates a pro-inflammatory pathway counteracted by the ACE2/Ang[1-7]/MAS axis, which in turn triggers an anti-inflammatory and protective pathway. However, ACE2 also acts as the receptor through which SARS-CoV-2 penetrates host cells to replicate. COX-2, HO-1, and iNOS are other crucial proteins involved in ultrafine particle (UFP)-induced inflammation and oxidative stress, and are closely related to the course of COVID-19. BALB/c male mice were subjected to PM2.5 sub-acute exposure to study its effects on ACE2, ACE, COX-2, HO-1, and iNOS protein levels in the main organs concerned with the pathogenesis of COVID-19. The results obtained show that sub-acute exposure to PM2.5 induces organ-specific modifications which might predispose to greater susceptibility to severe symptomatology in the case of SARS-CoV-2 infection. The novelty of this work consists in using a molecular study, carried out in the lung but also in the main organs involved in the disease, to analyze the close relationship between exposure to pollution and the pathogenesis of COVID-19.
## 1. Introduction
Particulate matter (PM), as a major component of air pollutants, contains a complex mixture of smoke, dust, and other solid particles, as well as liquid droplets, present in the air [1].
PM differs in size, shape, and chemical composition. Among the various methods of PM classification, the aerodynamic diameter is certainly the one that best defines its property of being transported in the atmosphere and its ability to be inhaled. Based on this parameter, PM is categorized into three classes: coarse particles or PM10 (ranging from 2.5 to 10 µm); fine particles or PM2.5 (smaller than 2.5 µm); ultrafine particles or PM0.1 (UFP, smaller than 0.1 µm) [2,3].
While larger particles show greater fractional deposition in the extra-thoracic and upper tracheobronchial regions, smaller particles (e.g., PM2.5) are mostly deposited in the deep lung [4]. Direct effects may occur via agents that are able to cross the pulmonary epithelium into the circulation, such as possibly soluble constituents of PM2.5 (e.g., transition metals and polycyclic aromatic hydrocarbons, PAHs) [4,5,6,7,8,9].
This subsequently may contribute to a systemic inflammatory state via increased oxidative stress, potentially leading to increased health risk [9].
It is well known that pollution impairs the first line of upper airway defense [10]; thus, people living in areas with high levels of pollutants are more prone to developing chronic respiratory conditions and more susceptible to infective agents [11].
SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) [12] is the pathogen of COVID-19. This disease was first reported in December 2019 in Wuhan, Hubei Province, China, and then spread worldwide. The course of the disease is usually mild, but in many cases it may require hospitalization and can degenerate into acute respiratory distress syndrome (ARDS), even leading to death.
A significant overlap was observed between increased mortality and morbidity and pollution levels.
Based on this correlation, many epidemiological studies, summarized in an exhaustive review by Marquès and Domingo [13], have investigated a possible relationship between the high level of SARS-CoV-2 lethality and atmospheric pollution.
Lombardy is one of the Italian regions with the highest level of virus lethality in the world and one of Europe’s most polluted areas [11].
Samples of PM2.5 gravimetrically collected during the winter of 2008 in the urban area of Milan (North Italy) were chemically characterized based on the potential toxicological relevance of its components. Milan winter PM2.5 contains high concentrations of pro-oxidant transition metals and PAHs and is mainly composed of particles ranging in size from 40 nm to 300 nm. Although the chemical composition is similar to that of other European cities, the annual levels of PM2.5 in Milan are higher [6].
PM2.5 induces an inflammation state with consequent production of cytokines that activate the pathways mediated by MAPK or JAK-STAT3, which in turn modulate the expression of matrix metalloproteinases (MMPs).
Phosphorylated STAT3 and phosphorylated ERK act as transcription factors at the nuclear level by increasing inflammation and MMPs expression.
MMPs are zinc-dependent endopeptidases that are capable of degrading the matrix, but also perform other functions, such as activation or inactivation of chemokines/cytokines, and are involved in inflammation [14,15].
The relationships between PM2.5 and inflammation have been mentioned in many pulmonary diseases, such as acute lung injury (ALI), asthma, and chronic obstructive pulmonary disease (COPD) [16,17,18]. Lin and collaborators [19] demonstrated, using a mouse model, that PM2.5-induced ALI is regulated by the Renin-Angiotensin System (RAS) and the Angiotensin-Converting Enzyme II/angiotensin 1-7/Mas receptor (ACE2/Ang[1-7]/Mas) axis has a crucial role in the pathogenesis of lung injury.
RAS is an essential endocrine system, strongly related to the cardiopulmonary system and inflammation which, by activating inflammatory factors in the lung, participates in pulmonary injury [20,21].
ACE and ACE2 are enzymes expressed in various organs, and are two key enzymes of RAS, generating two pathways with opposite effects [22].
In the Angiotensin-Converting Enzyme/Angiotensin II/AngII Type 1 Receptor (ACE/AngII/AT1R) axis, Ang II, produced by ACE from Ang I, interacts with the AT1 receptor, inducing the expression of IL-6, TNF-α, and TGF-β1 [23]. These cytokines activate transduction pathways involving STAT3 and ERK, and lead to increased production of MMPs and pro-inflammatory molecules. Consequently, this pathway is pro-inflammatory.
Instead, in the ACE2/Ang[1-7]/Mas axis, Ang 1-7 produced by ACE2 starting from Ang II interacts with MAS, which represses the STAT3 and ERK transduction pathway reducing the expression of MMPs and pro-inflammatory molecules [24]. Therefore, Ang 1-7 acts by inhibiting inflammatory pathways, JAK-STAT, MAPK, and NF-KB, but also activates anti-inflammatory molecules such as IL-10, and protective pathways such as NRF2, against ROS. Consequently, this pathway has an anti-inflammatory role [25].
Interestingly, the initial cell entry phase of the SARS-CoV-2 requires binding of its homo-trimeric spike glycoprotein to the membrane-bound form of angiotensin-converting enzyme 2 (ACE2) on the target cell [26,27].
Then, the relationship between exposure to PM2.5 and SARS-CoV-2 infection seems to actually converge on the RAS, and in particular on ACE2.
ACE2 acts as a cellular receptor of the virus, and the binding leads to the internalization of the complex in the target cell with consequent down-regulation of ACE2 [28].
Therefore, the imbalance of ACE2/ACE levels in COVID-19 and the dysregulated AngII/AT1R axis may partially be responsible for the cytokine storm and the resulting pulmonary damage [29,30].
The loss of the modulatory effect of Ang(1–7), obtained by its binding to the Mas receptor that attenuates inflammatory response, may be a further contributing factor in the hyper-inflammation status of severe cases of COVID-19 [31].
Our previous studies showed that UFP-induced inflammation and oxidative stress are associated with the alteration of COX-2, HO-1, and iNOS levels [32,33].
Lung and systemic inflammation are responsible for many of the severe cases of COVID-19 [34], which may ultimately cause severe respiratory failure, multi-organ dysfunction, and death [35].
The search for possible therapeutic strategies against SARS-CoV-2 is rapidly proceeding. Several potential target therapies have been proposed, including acetylsalicylic acid for its anti-inflammatory, analgesic, antipyretic, and antithrombotic effects [36].
These effects are obtained because ASA inhibits prostaglandin and thromboxane synthesis by irreversible inactivation of cyclo-oxygenase-1 (COX-1) and cyclo-oxygenase-2 (COX-2). Additional actions have been described to explain the ability of ASA to suppress inflammation, including heme oxygenase (HO) expression induction [37] and iNOS acetylation, resulting in the release of nitric oxide [38].
Based on these assumptions, here we examine, in a mouse model, the effects of PM2.5 exposures on ACE, ACE2, COX-2, HO-1, and iNOS in the main organs involved in COVID-19 pathology (lung, heart, liver, and brain), to test the potential close relationship between PM2.5 exposure and disease severity.
## 2.1. Animals
Male BALB/c mice (7–8 weeks old) were purchased from Harlan and housed in plastic cages under controlled environmental conditions (temperature 19–21 °C, humidity 40–$70\%$, lights on from 7:00 a.m. to 7:00 p.m.) where food and water were administered ad libitum. Animal use and care procedures were approved by the Institutional Animal Care and Use Committee of the University of Milano-Bicocca (protocol number: PP 10/2008) and were in compliance with the guidelines set by the Italian Ministry of Health (DL 116/92). Invasive procedures were performed under anesthesia, and an attempt was made to minimize animal suffering.
## 2.2. PM Sources and Characterization
Atmospheric winter PM2.5 was collected in Torre Sarca (Milano) and has already been described [6]. The details of the sampling and chemical analysis performed on PM2.5 were described by Perrone et al. [8,39], while the chemical composition of Milano PM2.5 was summarized in Sancini et al. [40].
The filter extractions were conducted by using an ultrasonic bath (Sonica®, SOLTEC, Milan, Italy), specifically developed to maximize the detachment efficiency of the fine PM. Particles were extracted from the filters in ultra-pure water with four cycles of 20 min each, then dried in a desiccator and weighed. PM2.5 aliquots were properly diluted in sterile saline, sonicated, vortexed, and immediately instilled in mice.
## 2.3. Dose
The purpose of this study is to analyze the adverse effects of exposure to PM2.5 on the different organs analyzed. For this reason, we reduced the cumulative PM2.5 dose proposed by Happo et al. [7] to 0.3 mg/mouse, to apply the same treatment scheme used by Farina et al. [41] and Sancini et al. [40]. Indeed, this protocol enhances extrapulmonary adverse effects while still affecting the lungs.
## 2.4. Intratracheal PM2.5 Instillation
Animals were randomly divided into two experimental groups: sham (isotonic solution), and PM2.5-treated mice. Five mice for each experimental group were intratracheally instilled.
Male BALB/c mice were exposed to a mixture of $2.5\%$ isoflurane (Flurane, Merial, Toulouse) anesthetic gas and kept under anesthesia for the whole instillation procedure. Intratracheal instillations with 100 µg of PM2.5 in 100 µL of isotonic saline solution or 100 µL of isotonic saline solution (sham) were administered through a MicroSprayer® Aerosolizer system (MicroSprayer® Aerosolizer- Model IA-1C and FMJ-250 High-Pressure Syringe, Penn Century, Wyndmoor, PA, USA), as previously described [42,43,44].
The intratracheal instillation was performed for a total of three instillations, on days 0, 3, and 6; 24 h after the last instillation, mice were euthanized with an anesthetic mixture overdose (Tiletamine/Zolazepam-Xylazine and isoflurane) (Figure 1).
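This schedule yields the cumulative dose stated in Section 2.3:

$$3 \times 100~\mu\text{g} = 300~\mu\text{g} = 0.3~\text{mg of PM2.5 per mouse}$$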
## 2.5. Organ Homogenization
Organs (lung, heart, liver, and brain) of sham and PM2.5-treated mice, after being excised quickly, were washed in ice-cold isotonic saline solution, minced, and suspended in $0.9\%$ NaCl plus protease inhibitors cocktail (Complete, Roche Diagnostics S.p. A Milano, Monza, Italy). The samples were then homogenized for 30 s at 11,000 rpm with Ultra-Turrax T25 basic (IKA WERKE) and sonicated for 30 s. All the above procedures were performed on ice. The samples were stored at −20 °C for subsequent biochemical analyses.
## 2.6. Electrophoresis and Immunoblotting
Lung, heart, liver, and brain homogenates of sham and PM2.5-treated mice were analyzed for protein content by quantification with a micro-bicinchoninic acid (BCA) assay (Sigma-Aldrich Cat# B9643, Cat# C2284 St. Louis, MO, USA); then, 30 µg of total proteins for each sample were subjected to SDS-PAGE ($10\%$) followed by Western blot.
Protein analysis was assessed with specific antibodies: ACE2 (2.5 µg/mL) and ACE (0.05 µg/mL) (R&D Systems, Minneapolis, MN, USA), COX-2 (1:250) (BD Transduction Laboratories, Franklin Lakes, NJ, USA), HO-1 (1:1000) (Cell Signaling Technology, Danvers, MA, USA), iNOS (1:300) (Biorbyt, Cambridge, UK). The secondary antibodies were appropriate horseradish peroxidase (HRP)-conjugated rabbit anti-goat (1:4000) and goat anti-rabbit or anti-mouse (1:5000) (Thermofisher Scientific, Milano, Italy).
Immunoreactive proteins were revealed by enhanced chemiluminescence (ECL SuperSignal detection kit, Thermofisher Scientific, Milano, Italy) and semi-quantitative analysis was estimated by ImageQuant™ 800 (GE Healthcare Life Sciences, Milan, Italy), program 1D gel analysis. No blinding was performed.
Staining of total proteins represents the actual amount of loading more accurately than a housekeeping protein, as it is less affected by procedural and biological variations, as demonstrated by recent studies [45,46]. Accordingly, samples were normalized with respect to the total amount of proteins detected by Ponceau staining, allowing a straightforward correction for lane-to-lane variation [45,47]. Each protein was then expressed as a percentage of the sham, which represents the control.
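A minimal sketch of this normalization is given below; the densitometry values are invented for demonstration and do not come from the study.

```python
# Sketch of Ponceau-based normalization: each band density is divided by the
# total-protein signal of its lane, then expressed relative to the sham mean.
import numpy as np

# Illustrative densitometry values (invented): five sham and five PM2.5 lanes.
band = np.array([1200.0, 1350.0, 1100.0, 1500.0, 1280.0,
                 1900.0, 2100.0, 1750.0, 2000.0, 1850.0])   # target protein
ponceau = np.array([50.0, 52.0, 48.0, 55.0, 51.0,
                    49.0, 53.0, 50.0, 54.0, 52.0])          # total protein per lane
is_sham = np.array([True] * 5 + [False] * 5)

normalized = band / ponceau                      # lane-to-lane loading correction
percent_of_sham = 100 * normalized / normalized[is_sham].mean()
print(percent_of_sham[~is_sham].mean())          # treated group as % of sham
```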
## 2.7. Statistical Analysis
For each parameter measured in sham and PM2.5-treated mice, the means (±standard error of the mean, S.E.) were calculated.
Statistical differences were tested by one-way ANOVA and t-test and were considered significant at the $95\%$ level (p-value < 0.05).
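A sketch of this testing step, with illustrative data, is shown below. Note that with only two groups, one-way ANOVA and the two-sample t-test are equivalent ($F = t^2$), which is why both yield the same conclusion here.

```python
# Sketch of the significance testing described above (illustrative data only).
import numpy as np
from scipy import stats

sham = np.array([100.0, 95.0, 108.0, 102.0, 97.0])       # % of sham, control
treated = np.array([138.0, 145.0, 131.0, 142.0, 136.0])  # % of sham, PM2.5

t_stat, p_t = stats.ttest_ind(sham, treated)  # two-sample t-test
f_stat, p_f = stats.f_oneway(sham, treated)   # one-way ANOVA

print(p_t < 0.05)                     # significant at the 95% level
print(np.isclose(f_stat, t_stat**2))  # with two groups, F == t^2
```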
## 3. Results
In this project, we analyzed the effects of PM2.5 sub-acute administrations on mouse lungs, heart, liver, and brain, evaluating their possible implications in COVID-19 pathology.
In 2018, Lin and collaborators hypothesized that acute lung injury (ALI) induced by PM2.5 was regulated by RAS, with a crucial role for the ACE2/Ang[1-7]/MAS axis in the pathogenesis of the damage. In fact, the atmospheric particulate, through the activation of pro-inflammatory pathways, is implicated in different respiratory and cardiovascular diseases, and the RAS system is strongly related to the cardiopulmonary system and inflammation.
To test this hypothesis, they studied ACE2 expression in the lung tissue of mouse models of PM2.5-induced ALI and found a significant up-regulation of this protein. In addition, following ACE2 knockdown, they observed an increase in lung levels of p-STAT3 and p-ERK1/2, as well as a reduction in injury recovery and tissue remodeling. These results confirm that ACE2 is closely involved in the pathogenesis of PM2.5-induced ALI, playing a protective role [19]. The increase of ACE2 in the lung after PM2.5 exposure was confirmed by a subsequent study [48].
However, the ACE2 protein, besides counter-regulating the inflammatory effects triggered by PM exposure acting as an organ-protective factor, is also the main receptor of SARS-CoV-2, the virus responsible for the COVID-19 pandemic [49].
The dual function of ACE2, together with the overlapping between the geographic distribution of COVID-19 outbreaks and high local pollution levels, led to the hypothesis of a correlation between the PM concentration, viral infection susceptibility, and severity of symptoms [50,51].
Induction of inflammation and oxidative stress was observed in mice exposed to UFP, resulting in increased COX-2, HO-1, and iNOS, not only in the lung and heart [32] but also in the cerebellum and hippocampus [33].
Interestingly, as recently demonstrated, these proteins have been related to the pathogenesis of COVID-19, once again suggesting a close relationship between air pollution and SARS-CoV-2 infection [52,53,54].
Based on this evidence, we analyzed the ACE/ACE2 protein pathway and COX-2, HO-1, and iNOS protein levels in a mouse model after sub-acute PM2.5 exposure, in order to evaluate a possible molecular correlation between air pollution and susceptibility to SARS-CoV-2 infection.
This study was performed in the lungs and other organs involved in the COVID-19 syndrome, such as the heart, liver, and brain. Although it is known that SARS-CoV-2 infection causes respiratory disease, it also induces adverse effects at the extrapulmonary level.
The effects of PM2.5 exposure vary according to the organs analyzed.
In the lung of PM2.5-treated mice, the levels of ACE2 (+$40\%$) and COX-2 (+$40\%$) increased compared to the sham (Figure 2).
In the heart tissue, PM2.5 treatment induced a significant decrease in ACE and ACE2 (−$34\%$ and −$27\%$ respectively) while showing a significant increase in COX-2 level (+$21\%$) and HO-1 (+$60\%$), compared to the sham (Figure 3).
PM2.5-treated mice showed increased levels of ACE (+$80\%$) in the liver, compared to sham (Figure 4), resulting in a significant change in the ACE/ACE2 ratio (+$83\%$) (Table 1).
In the brain, as well as in the liver, PM2.5-treated mice showed increased levels of ACE (+$39\%$), compared to sham (Figure 5), resulting in a significant change in the ACE/ACE2 ratio (+$40\%$) (Table 1).
All the other investigated biomarkers were not affected by PM2.5 repeated instillations, in all the organs considered (Figure 2, Figure 3, Figure 4 and Figure 5).
## 4. Discussion
Several studies have shown that the ACE2/Ang[1-7]/MAS axis is critically involved in lung pathophysiological processes. It can antagonize the negative effects mediated by RAS or the ACE/AngII /AT1 axis, such as lung inflammation, fibrosis, pulmonary arterial hypertension, and apoptosis of alveolar epithelial cells, suggesting an anti-inflammatory and organ-protective role of the ACE2 protein which, however, is also the receptor of SARS-CoV-2 [55].
Therefore, the significant ACE2 increase observed in the lungs after PM2.5 sub-acute exposure might favor SARS-CoV-2 pulmonary entry in case of infection.
Furthermore, many inflammatory and oxidative stress mediators are known to be impaired in COVID-19 and are associated with multi-organ damage and poor disease prognosis [56,57].
Following sub-acute exposure to PM2.5, COX-2 increased, indicating an inflammatory state that a possible infection could exacerbate.
The COX-2 up-regulation is typical of viral infections and COVID-19. In particular, SARS-CoV-2 acute respiratory syndrome induces severe tissue damage by releasing “cellular debris”. Both primary infection and accumulation of cellular debris initiate the endoplasmic reticulum stress response and upregulate inflammatory enzymes, including microsomal prostaglandin E synthase-1 (mPGES-1) and prostaglandin-endoperoxide synthase 2 (also known as COX-2), which subsequently produce eicosanoids: prostaglandins (PG), leukotrienes (LT), and thromboxanes (TX). These pro-inflammatory lipids, named autacoids, trigger the cytokine storm, which mediates the widespread inflammation and organ damage found in patients with severe COVID-19 [58].
Instead, subacute treatment with PM2.5 does not induce changes in HO-1 and iNOS levels in the lungs. These data are in agreement with previous in vivo work [40], in which the large amount of PAHs characterizing the PM2.5 samples increased lung cytochrome expression, in particular Cyp1A1 and Cyp1B1, which are responsible for the metabolism of PAHs. However, PAH metabolism within the lungs did not promote an increase in HO-1 levels.
It is possible to speculate that the treatment with PM2.5 in the lung mainly involves the alveolar-capillary barrier. The increase in vascular permeability following endothelial activation would facilitate the translocation of fine particles from the lungs into the bloodstream.
Concerning the heart, several studies have highlighted the cardioprotective effect of the ACE2/Ang[1-7]/MAS axis against the damage generated by the ACE/AngII/AT1 axis [59]. Ferreira et al. (2001) observed, for the first time, that the activity of Ang[1-7], produced by ACE2, induced a significant reduction in cardiac arrhythmias related to ischemia/reperfusion (an anti-arrhythmogenic effect), in addition to an improvement in post-ischemic heart function. Subsequent studies have highlighted the ability of Ang[1-7] to attenuate cardiac hypertrophy, suggesting an anti-hypertrophic role [55]. Consequently, the significant decrease in ACE2 observed following PM2.5 sub-acute exposure might be related to ACE reduction. The ACE level reduction, inducing a decrease in AngII, would make ACE2 “less necessary” but might lead to increased inflammation and impaired heart function, predisposing to greater severity in cases of SARS-CoV-2 infection [27].
As in the lung, the COX-2 level significantly increased in treated mice compared to sham following PM2.5 sub-acute exposure. This result suggests a highly compromised inflammatory context in the heart that could degenerate in case of SARS-CoV-2 infection.
Furthermore, in the heart, HO-1 increased following sub-acute exposure to PM2.5, as noted in our previous work [40], probably as an attempted protective response.
Numerous studies have reported the beneficial effects of the ACE2/Ang[1-7]/MAS axis in counteracting steatosis, non-alcoholic inflammation, and liver fibrosis, and in improving insulin sensitivity in the liver. These observations are in agreement with the increase in ACE2 observed in chronic liver lesions in animal and human models. Furthermore, Ang[1-7] is known to suppress the growth of hepatocellular carcinoma and angiogenesis [55].
Lubel et al. [60] demonstrated that patients with liver cirrhosis showed remarkably high plasma concentrations of both Ang[1-7] and AngII. In cirrhotic rat liver, Ang[1-7] significantly inhibited the vasoconstriction induced by intrahepatic AngII, through NO signaling pathways dependent on eNOS and guanylate cyclase.
Sub-acute exposure to PM2.5, instead, induces an increase in ACE but not ACE2, showing that exposure to PM2.5 in the liver does not activate the anti-inflammatory ACE2/Ang[1-7]/MAS axis to counteract the increased ACE. This event causes a significant increase in the ACE/ACE2 ratio and, consequently, in the pro-inflammatory pathways, indicating an inflammatory state that could be exacerbated by possible infection. No significant changes in HO-1 and iNOS were observed in the liver.
Finally, ACE2 is present in the brain, predominantly in neurons [61]. In an interesting review, the physiological aspects of the ACE2/Ang[1-7]/MAS axis in different organs were analysed, and in particular the role of Ang[1-7] in the brain. ACE2 appears to be essential in the central nervous system for cardiovascular regulation. Indeed, transgenic mice overexpressing the synapsin promoter-driven human ACE2 exhibit protective phenotypes for cardiovascular disease. This suggests that the balance between Ang[1-7] and AngII in brain regions, which regulates the autonomic nervous system, is critical [52].
The increase in ACE levels, observed following sub-acute exposure to PM2.5, causes a significant increase in the ACE/ACE2 ratio. Alteration of the balance between Ang[1-7] and AngII indicates a compromised situation in the brain following exposure to PM2.5, which can degenerate in the case of SARS-CoV-2 infection, with serious outcomes also at the heart level.
Furthermore, the slight decrease in HO-1 observed in the brain suggests a lower countering power against the inflammatory cascade in the case of SARS-CoV-2 infection [62], since HO-1 exerts a powerful antioxidant effect degrading heme, a pro-inflammatory mediator. Indeed, a lower expression of HO-1, due to different polymorphisms, has been associated with greater difficulty in counteracting SARS-CoV-2-induced inflammation [63].
## 5. Conclusions
Sub-acute exposure to PM2.5 causes alterations in the ACE/ACE2 system, with possible consequences on COVID-19 pathogenesis.
In an in vivo model of male BALB/c mice, PM2.5 exposure causes variations in the ACE2 and/or ACE levels in all the organs considered.
It is known that ACE2 can counteract the pro-inflammatory pathways activated by ACE, acting as an organ-protective factor, but also acts as a receptor for the entry of SARS-CoV-2 into host cells in case of infection. An alteration of the ACE/ACE2 ratio, when in favor of ACE, suggests a greater probability of manifesting severe symptoms under infection due to the pro-inflammatory pathways’ enhancement. In contrast, a condition favoring ACE2 increase can involve greater susceptibility to SARS-CoV-2 entry in case of contact with the virus.
Therefore, exposure to PM2.5 causes organ-specific changes in the ACE/ACE2 pathway. Across the organs analyzed, HO-1 and iNOS did not undergo significant changes, except in the heart, where an increase in HO-1 was observed, in agreement with our previous work [40]. COX-2, in contrast, increased significantly in the lungs and heart, with a considerable increment in the brain.
However, COX-2 plays a central role in viral infections. It is known that SARS-CoV-2 induces the over-expression of COX-2 in human cell cultures and mouse systems [64] and that it could be involved in regulating lung inflammation and disease severity.
In the concept of “risk stratification,” living in a polluted environment can significantly increase the possibility of developing a severe form of COVID-19, especially in individuals with predisposing risk factors (diseases, lifestyle, genetics). This concept could at least partially explain the greater lethality of the virus observed in highly polluted areas, including Lombardy.
The novelty of this work is the use of a molecular approach on an in vivo and non-epidemiological model carried out not only at the pulmonary level but also in the primary organs involved in the disease, in order to analyze the close relationship between pollution exposure and the pathogenesis of COVID-19.
It could be interesting to repeat the analyses following exposure to UFP which, given their aerodynamic diameter of less than 100 nm, have greater penetration and higher translocation rates with possibly worse toxicity profiles.
In our opinion, experimental studies evaluating the role of air pollution in specific populations are urgently needed for a deeper understanding of the mechanisms leading to a worse prognosis.
# Public Support for Nutrition-Related Actions by Food Companies in Australia: A Cross-Sectional Analysis of Findings from the 2020 International Food Policy Study
## Abstract
Unhealthy food environments contribute to unhealthy population diets. In Australia, the government currently relies on voluntary food company actions (e.g., related to front-of-pack labelling, restricting promotion of unhealthy foods, and product formulation) as part of their efforts to improve population diets, despite evidence that such voluntary approaches are less effective than mandatory policies. This study aimed to understand public perceptions of potential food industry nutrition-related actions in Australia. An online survey was completed by 4289 Australians in 2020 as part of the International Food Policy Study. The level of public support was assessed for six different nutrition-related actions related to food labelling, food promotion, and product formulation. High levels of support were observed for all six company actions, with the highest support observed for displaying the Health Star Rating on all products ($80.4\%$) and restricting children’s exposure to online promotion of unhealthy food ($76.8\%$). Findings suggest the Australian public is strongly supportive of food companies taking action to improve nutrition and the healthiness of food environments. However, given the limitations of the voluntary action from food companies, mandatory policy action by the Australian government is likely to be needed to ensure company practices align with public expectations.
## 1. Introduction
Unhealthy diets are a key risk factor for non-communicable diseases (NCDs) and a global health priority [1]. It is widely accepted that food environments have a major influence on dietary intake [2,3]. In Australia, food environments generally do not promote healthy eating [4,5,6,7], with “discretionary” foods that are high in energy, sugar, salt and/or saturated fat widely available and heavily promoted [7]. The supply and marketing of discretionary food in Australia is led by a relatively small number of large food companies with substantial market power [7,8]. These food companies use a wide range of strategies to influence consumers as part of integrated marketing campaigns, including: traditional and digital marketing tactics (e.g., television and outdoor advertisements, social media and gamification) [9]; retail-based promotion (e.g., price promotions, positioning and shelf space) [8]; and on-package marketing (e.g., cartoon characters and health claims) [10,11].
There have been consistent calls for government-led policy action to improve the healthiness of food environments as part of efforts to address unhealthy diets [2,3,12]. Some countries have implemented a suite of mandatory food-related policies including: restricting exposure of children to marketing of unhealthy food [13]; providing front-of-pack nutrition labelling [14]; and increasing the prices of unhealthy foods (e.g., taxes on sugary drinks) [15]. In contrast, the Australian government’s policy response to unhealthy diets falls far short of global benchmarks [16]. Currently, Australia’s nutrition-related policies rely heavily on voluntary action by food companies, including the voluntary Health Star Rating (HSR) front-of-pack nutrition labelling system [17], industry codes for adult and children’s marketing guidelines [18], and the Healthy Food Partnership Reformulation Program [19]. The lack of mandatory action has been attributed to multiple factors, including food industry lobbying to limit regulations that may harm their profits, and the prioritisation of economic wealth over public health [20,21,22,23]. Reliance on voluntary action has for the most part been shown to be ineffective, with limited uptake of such policies by food companies coupled with weak or incomplete implementation where there is uptake [24,25,26]. A 2018 assessment of Australian food company nutrition-related policies and commitments found that most companies fell short of global recommendations [27].
In the absence of government regulation, pressure on food companies from external stakeholders such as the general public and investors can lead to increased implementation of nutrition-related actions (e.g., via corporate sustainability strategies) [28,29,30]. An understanding of the extent of public support for food company action is an important advocacy tool to inform strategies to influence food industry efforts to improve the healthiness of Australian food environments. Public expectations of food companies can also guide government policy development [31].
Previous research has found that public support for various nutrition-related policies differs between countries, due to factors such as differing cultural norms, political ideology, and stage of implementation [32,33]. Research examining public support for nutrition-related policies in Australia has largely focused on support for government-led policy solutions [34,35], with limited research focused on public perceptions related to food company action [36,37,38]. Two previous studies investigated public perceptions of unhealthy food sponsorship at community events and in community sport [37,38]; and one study investigated the perceived responsibility of food companies to address population health outcomes, generally [36]. While these studies found strong support for increased food company action to improve population diets, they were very limited in the scope of the nutrition-related actions they explored. To contribute to addressing this knowledge gap, this study aimed to understand public support for food company actions targeting front-of-pack nutrition labelling, exposure of children to marketing of unhealthy foods and product reformulation in Australia, and how the level of support varied by socio-demographic factors.
## 2.1. Study Design and Sampling
Data are from the 2020 International Food Policy Study (IFPS), an online annual repeat cross-sectional survey conducted across five countries: Australia, Mexico, Canada, the USA, and the UK [39]. The current study used data collected between November and December 2020 from respondents in Australia.
Participants aged 18 to 100 residing in Australia were recruited through Nielsen Consumer Insights Global Panel and their partners’ panels, using non-probability sampling methods. Email invitations were sent to a random sample of eligible panellists. Participants provided informed consent prior to survey completion. Participants received remuneration in line with the panels’ existing incentive structure (e.g., points-based or monetary) [40]. The study received ethics clearance through a University of Waterloo Research Ethics Committee (ORE# 30829). Deakin University Human Research Ethics Committee provided an ethics exemption in 2018. A full description of the study methodology has been published elsewhere [40].
## 2.2.1. Support for Food Company Action
Public support was assessed for six actions food companies can take to improve the overall healthiness of the food supply, as outlined in Table 1. The set of actions was derived from global, nutrition-related recommendations for food companies [27]. Respondents were randomly selected to answer only one of the six questions to reduce overall survey length and response fatigue. Support was measured by asking respondents, “Please tell us whether you agree or disagree with the following statement”. A 5-point Likert scale was used to assess support including “strongly agree”, “agree”, “neutral”, “disagree” and “strongly disagree”. Each question also had a “refuse to answer” and “don’t know” option.
## 2.2.2. Sociodemographic Variables
Self-reported demographic variables included age group (18–29, 30–44, 45–59, 60+ years), sex, education, body mass index (BMI), household income, whether respondents had children, and the respondents’ food shopping responsibility. Education was categorised into three levels: “low” (year 12 or lower), “medium” (trade certificate or diploma) and “high” (bachelor’s degree or above). BMI was calculated using self-reported height and weight and was categorised according to World Health Organization classification [41]. Household income was reported in ranges of AUD 10,000 from “Less than AUD 10,000” to “AUD 150,000 and over”. Equivalised household income was calculated using the OECD-modified equivalence scale [42]. This scale is used by the Australian Bureau of Statistics to adjust for the economies that arise from sharing resources within households, allowing for more meaningful comparisons of household income [43]. The equivalisation scale assigns a value of 1 to the household head, 0.5 to each additional adult and 0.3 to each child [42]. The categorical data collected for income were assigned a value in the middle of each income range (e.g., AUD 20,000–30,000 became AUD 25,000). The OECD-modified equivalence scale was applied to this value to determine an estimated equivalised household income, as sketched below. Income was then recategorised into low, medium, and high tertiles. Variables representing socio-demographic characteristics were selected for inclusion in regression models a priori based on being both assessed in the IFPS study and known to influence diet-related behaviours [32,44,45].
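A minimal sketch of the equivalisation step described above; the function name and example values are illustrative, not the study’s code.

```python
# Hypothetical helper for OECD-modified income equivalisation.

def equivalised_income(income_midpoint: float,
                       n_additional_adults: int,
                       n_children: int) -> float:
    """Divide household income by the OECD-modified equivalence scale
    (1.0 for the household head, 0.5 per additional adult, 0.3 per child)."""
    scale = 1.0 + 0.5 * n_additional_adults + 0.3 * n_children
    return income_midpoint / scale

# A reported range of AUD 20,000-30,000 is assigned its midpoint (25,000);
# for a two-adult, two-child household the scale is 1 + 0.5 + 0.6 = 2.1.
print(round(equivalised_income(25_000, n_additional_adults=1, n_children=2)))  # 11905
```

Tertiles (low, medium, high) can then be formed from the resulting values, e.g., with `pandas.qcut`.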
The extent of food shopping responsibility was categorised as “most”, “shared equally”, “some, but less than others” and “none”. Dietary health was categorised as “poor”, “fair”, “good”, “very good” and “excellent”. Each variable also had “refuse to answer” and “do not know” options.
## 2.3. Data Management and Analysis
A total of 5500 respondents completed the survey. Respondents were excluded for the following reasons: invalid response to a data quality question; survey completion time under 15 min; and/or invalid responses to at least 3 of 21 open-ended measures ($$n = 1211$$), leaving an analytic sample of 4289 respondents. Participants with missing results for the sociodemographic variables were included in the descriptive analysis, but were excluded in the logistic regression models that included these variables. Missing data, “refuse to answer”, and “do not know” responses were excluded from analysis. Data were weighted using post-stratification sample weights constructed using a raking algorithm with population estimates based on age, sex at birth, region, ethnicity, and education [40]. Estimates reported are weighted. Analyses were conducted using Stata/BE-17 [46].
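For readers unfamiliar with raking, the post-stratification weighting mentioned above can be illustrated with a small iterative proportional fitting routine; the margins, categories, and data below are invented for illustration and do not reproduce the IFPS weighting, which also raked on region, ethnicity, and education.

```python
import pandas as pd

# Toy raking (iterative proportional fitting) on two margins.
def rake(df, margins, weight_col="w", n_iter=100, tol=1e-10):
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(n_iter):
        max_shift = 0.0
        for var, targets in margins.items():
            totals = df.groupby(var)[weight_col].sum()
            factors = {cat: targets[cat] / totals[cat] for cat in targets}
            df[weight_col] *= df[var].map(factors)  # rescale to match this margin
            max_shift = max(max_shift, max(abs(f - 1.0) for f in factors.values()))
        if max_shift < tol:  # weights have converged on all margins
            break
    return df

sample = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "M"],
    "age": ["18-29", "60+", "18-29", "60+", "60+"],
})
# Population margins scaled to the sample size of 5.
margins = {"sex": {"F": 2.5, "M": 2.5}, "age": {"18-29": 2.0, "60+": 3.0}}
print(rake(sample, margins))
```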
Explanatory variables used in the models included age, sex, BMI, education, equivalised household income, shopping role, guardian/parental status, and health of diet. These were chosen as covariates based on the existing literature [34,44].
Additional sensitivity analysis was undertaken to determine the best-fitting model through exploratory univariate logistic regression modelling for each covariate [47]. To determine the impact of “neutral” responses, a separate multivariable logistic regression analysis was conducted on all outcome measures, excluding “neutral” responses. The results from this analysis were similar to those of the final model that included the “neutral” response option. The final model was tested for goodness of fit using the Hosmer–Lemeshow test [47], as illustrated in the sketch below. Due to the number of response options being tested, the significance level was set at 0.01.
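A hedged sketch of this modelling pipeline (binary support outcome, categorical sociodemographic covariates, Hosmer–Lemeshow goodness of fit) on simulated data; the variable names are assumptions, and statsmodels has no built-in Hosmer–Lemeshow test, so the classic decile-of-risk statistic is implemented by hand.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "support": rng.integers(0, 2, n),  # 1 = "agree"/"strongly agree"
    "age_group": rng.choice(["18-29", "30-44", "45-59", "60+"], n),
    "sex": rng.choice(["male", "female"], n),
    "education": rng.choice(["low", "medium", "high"], n),
})
model = smf.logit("support ~ C(age_group) + C(sex) + C(education)", data=df).fit(disp=0)
print(np.exp(model.params))  # coefficients exponentiated to odds ratios

def hosmer_lemeshow(y, p, g=10):
    """Decile-of-risk HL statistic; chi-square with (groups - 2) df."""
    groups = pd.qcut(p, g, labels=False, duplicates="drop")
    stat = 0.0
    for k in np.unique(groups):
        idx = groups == k
        obs, exp, m = y[idx].sum(), p[idx].sum(), idx.sum()
        stat += (obs - exp) ** 2 / (exp * (1 - exp / m))
    return stat, chi2.sf(stat, len(np.unique(groups)) - 2)

stat, pval = hosmer_lemeshow(df["support"].to_numpy(), model.predict(df).to_numpy())
print(f"HL statistic = {stat:.2f}, p = {pval:.3f}")
```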
## 3.1. Sample Characteristics
The weighted sociodemographic characteristics of respondents are detailed in Table 2. The mean age of respondents was 46.6 years (range 18–92), and there was an approximately equal proportion of male and female respondents. The majority of respondents reported low to medium education levels, had no children, did most of the food shopping in their household, and rated their overall diet quality as “good” to “excellent”.
## 3.2. Support for Food Company Action
The proportion of respondents who supported the various food company nutrition-related actions is detailed in Figure 1. There was more than $60\%$ support for all actions, with the highest level of support for food companies displaying the Health Star Rating on packaging of all food and drinks ($80.4\%$). The lowest support was for food companies not placing “cartoon characters or other images that appeal to children on product packaging for unhealthy food and drinks” ($61.6\%$) and only making “nutrition claims on products that are healthy overall” ($61.9\%$). Across all food company actions, the proportion of participants who opposed the actions was low ($2.0\%$ to $10.1\%$), while the proportion of participants reporting a neutral response ranged from $15.4\%$ to $29.6\%$.
## 3.3. Support for Food Company Actions by Sociodemographic Characteristics
Results from the multivariable logistic regression model fitted to examine associations between sociodemographic characteristics and level of support for voluntary food company action are detailed in Table 3. Overall, age was a significant covariate for three of the six initiatives. Respondents aged 60 years and over were more than twice as likely as 18–29 year-olds to support food companies “not placing cartoon characters or other images that appeal to children on product packaging for unhealthy food and drinks” and “not advertising unhealthy food and drinks on TV at times when children and teenagers are likely to be watching”. Those aged 60 years and over were more than three times as likely as 18–29 year-olds to support food companies “not targeting children and teenagers with online ads for unhealthy food and drinks”. No significant differences in support were found for any other age groups.
Females were almost twice as likely as males to report support for not targeting “children and teenagers with online ads for unhealthy food and drinks”. Sex was not significantly associated with support for any other initiative. Respondents with bachelor’s degrees or above were more than twice as likely to support food companies not targeting “children and teenagers with online ads for unhealthy food and drinks” compared to respondents with low education levels.
No significant associations were found between categories of household income, BMI, parental status, shopping responsibility, and the overall health of diet and level of support for any initiative. For three food company initiatives (that food companies “have a responsibility to make food and drinks healthier for consumers”, “should clearly display the Health Star Rating on the packaging of ALL food and drinks” and “should only make nutrition claims on products that are healthy overall”), no significant associations were found between any sociodemographic variables or BMI and level of support.
## 4. Discussion
This study found strong public support for food companies to take action to improve the healthiness of Australian food environments. The highest level of support was observed for displaying the Health Star Rating on all products, restricting exposure of children to promotion of unhealthy food online, and manufacturing healthier food and drinks. Support for restricting other types of marketing of unhealthy products to children and the responsible use of nutrition claims was also high.
Public support for voluntary nutrition-related action by food companies in this study was generally consistent with findings related to the support of government regulation of food companies from previous studies in Australia and internationally [33,34,35,37,38,48]. A scoping review of 18 studies that explored Australians’ views on regulatory nutrition policies found high levels of support for implementation of interpretive front-of pack nutrition labelling, and moderate to high levels of support for restricting unhealthy food marketing to children and reformulation to improve product healthiness [35]. Likewise, an international study examining public support for nutrition interventions in seven countries, including Australia, found high support across all countries for reformulation interventions and interpretive front-of-pack nutrition labelling (e.g., Health Star Rating, Nutriscore) [48].
The strong level of support for Health Star Rating labelling corresponds with previous studies that have found support for health-related policies and actions increased after their widespread implementation [32,49]. In Australia, the Health Star Rating system was first introduced in 2014, with uptake increasing to $43\%$ of eligible products by 2021 [7]. Some studies have posited that increased acceptance of an initiative after implementation may be associated with the public observing positive impacts or not observing negative consequences [49].
The association between demographic characteristics and the extent of support for various food company nutrition-related actions was generally uniform, with some variation across the different actions. Of note, support for food companies not targeting children with online advertisements for unhealthy food and drinks was significantly higher for those over 60 years compared with 18–29 year olds. Other studies have also found that those above 60 years old were more likely to support nutrition-related policies that were similar to the ones examined in this study [33,50]. The lack of association between parental status and support for food company actions is consistent with previous research which found that parental status was not significantly associated with support for government policies focused on restricting the marketing and promotion of unhealthy food and beverages to children [37,50,51]. While previous literature has identified being female and having a higher level of education as common demographic characteristics associated with increased support for food-related interventions (i.e., sugar sweetened beverage tax, food placement, price-promotion, and restriction of unhealthy food marketing to children), the current study found no significant association between education and most nutrition-related actions [34,44,50]. The exception was a significant association between education and support for online advertising restrictions. The lack of significant differences in the results across different socioeconomic groups likely reflects the broad support for such measures across the population.
Despite this study’s findings that there is both strong public support for companies to take action to improve nutrition, and minimal public opposition to such action, voluntary uptake of globally recommended nutrition-related actions by food companies in Australia has generally been limited. The most recent report (2020) measuring uptake of the Health Star Rating system showed that, six years post-implementation, only $41\%$ of eligible products displayed the Health Star Rating [7]. Reformulation efforts have also been limited, with little change in the overall nutritional quality across all packaged food categories between 2019 and 2021, and few companies formally committing to the Healthy Food Partnership’s reformulation program [7]. There is also consistent evidence to demonstrate the inadequacy of current industry self-regulation in protecting Australian children from unhealthy food marketing online, on television, outdoors, and through sport sponsorships [52,53,54,55]. An assessment of Australia’s largest food and beverage manufacturers found there were significant opportunities to improve nutrition-related policies and practices across the sector, including those related to reformulation, nutrition labelling, and food marketing [27].
## Implications
Overall, the relatively low level of implementation of globally recommended nutrition policies by food companies likely indicates that public support for nutrition-related action is not sufficient to drive policy and practice change for the food industry as a whole. Nevertheless, there appears to be potential to capitalise on the high levels of public support for action to better advocate for change by food companies. Such advocacy is likely to prove most influential if it involves coalitions working together [3]. Due to their potential to influence the actions of public companies, including the large multi-national food companies that dominate food systems in Australia, the institutional investment community may represent a potential lever for increased action [56].
The Australian government currently relies heavily on voluntary actions to improve population diets. Not only do such policies fall short of global recommendations, over the past five years (2017–2022) little policy progress has been observed at the federal government level [16]. The recently released National Obesity Strategy (2022–2032) [57] and National Preventive Health Strategy (2021–2030) [58] have a strong focus on policies for creating healthier food environments, including in the areas of food labelling, food promotion, and food composition. Public support for food company actions in this area is an important consideration as part of policy development processes [21], with the current study indicating strong public support for greater action. A number of other countries, including the United Kingdom [59] and Chile [60], have recently implemented mandatory regulations in these areas, providing a clear pathway for action for the Australian government.
The findings from the current study provide important insight into the current perceptions of the Australian public towards nutrition-related actions by the food industry. The study’s main strength is that it drew data from a relatively large sample of Australians (with selection of participants weighted to ensure the sample closely resembled the population sociodemographics in Australia). Respondents were recruited using nonprobability-based sampling from a commercial panel, meaning that despite the national sample, the findings should not be presumed to provide nationally representative estimates [61,62]. Importantly, the survey measures did not specify whether the relevant food company action would be implemented voluntarily or in response to government legislation. As such, this study is not able to provide any indication of whether the Australian public prefers a voluntary or mandatory approach to food company nutrition-related actions [63].
## 5. Conclusions
This study found strong public support in Australia for food companies to take action to improve nutrition and the healthiness of food environments. The findings from this study support greater implementation of nutrition-related policies and initiatives focused on improving the healthiness of food products, transparent labelling practices and socially responsible marketing strategies. With the current reliance on voluntary action from food companies in Australia, mandatory policy action may be needed to ensure company practices align with public expectations.
# Heart Rate from Progressive Volitional Cycling Test Is Associated with Endothelial Dysfunction Outcomes in Hypertensive Chilean Adults
## Abstract
Background: A progressive volitional cycling test is useful in determining exercise prescription in populations with cardiovascular and metabolic diseases. However, little is known about the association between heart rate during this test and endothelial dysfunction (EDys) parameters in hypertensive (HTN) patients. Objective: To investigate the association between EDys markers (flow-mediated dilation [FMD], pulse wave velocity of the brachial artery [PWVba], and carotid intima-media thickness [cIMT]) and heart rate during a cycling test in HTN adults. A secondary aim was to characterize cardiovascular, anthropometric, and body composition outcomes in this population. Methods: This was a descriptive clinical study in which adults (men and women) were assigned to one of three groups: HTN, elevated blood pressure (Ele), or a normotensive control group (CG), and completed a progressive cycling test. The primary outcomes were FMD, PWVba, cIMT, and heart rate (HR) at 25–50 watts (HR25–50), 50–100 watts (HR50–100), and 75–150 watts (HR75–150) of the Astrand test. Secondary outcomes included body mass index (BMI), waist circumference, body fat percentage (BF%), skeletal muscle mass (SMM), resting metabolic rate (RMR), and estimated body age, as measured by a bio-impedance digital scale. Results: Analyses of the associations between FMD, PWVba, and HR25–50, HR50–100, and HR75–150 watts revealed no significant association in the HTN, Ele, and CG groups. However, a significant association was found between cIMT and HR75–150 watts in the HTN group (R² = 47.1%, β = −0.650, $$p = 0.038$$). There was also a significant trend ($$p = 0.047$$) towards increasing PWVba across the CG, Ele, and HTN groups. Conclusion: Heart rate during a progressive cycling test is associated with the EDys parameter cIMT in HTN patients, with particularly strong predictive capacity for vascular parameters in the second and third stages of the Astrand exercise test compared to normotensive controls.
## 1. Introduction
Atherosclerosis is a chronic disease characterized by the accumulation of lipoproteins in the inner layer of artery walls. This accumulation is often due to oxidative damage to low-density lipoprotein (LDL-c) [1]. The accumulation of LDL-c can lead to inflammation in the major arteries (e.g., carotid and brachial arteries), which typically progresses to fibroatheromas [2]. However, before the development of atherosclerosis, an endothelial dysfunction (EDys) state is usually found. EDys is a phenotypic condition that is an intermediate pathology, characterized by a pro-thrombotic and pro-inflammatory state. This is the result of an imbalance between the actions of vasodilators and vasoconstrictors, which modifies the “function” and “structure” of the vasculature [3]. Traditional methods for detecting EDys, such as coronary epicardial vasoreactivity testing and venous occlusion plethysmography, are highly invasive, expensive, and time-consuming. Therefore, non-invasive methods based on ultrasound imaging have been rapidly implemented in clinical management [4,5].
EDys is often associated with several health conditions, including arterial hypertension (HTN), obesity, coronary artery disease, chronic heart failure, peripheral artery disease, diabetes, metabolic syndrome, non-alcoholic fatty liver disease, and chronic renal failure [6]. In Chile, $26.9\%$ of adults have HTN, and the prevalence is markedly higher among older adults [7]. Therefore, a substantial number of adults and older adults can be expected to develop EDys, which may progress to atherosclerosis or atheromatous plaque and, in turn, increase the risk of stroke. Regarding “functional” parameters, the percentage of flow-mediated dilation (FMD) is a well-suited and robust marker of vascular health in adults. Low values of FMD (i.e., <$6.5\%$) denote impaired vascular function associated with cardiometabolic risk [4,8]. Furthermore, pulse wave velocity of the brachial artery (PWVba) is a recognized marker of arterial stiffness in adults. Although different values have been proposed for cardiovascular risk identification (e.g., PWVba > 18 m·s−1 [9]), a PWVba value > 10 m·s−1 is well accepted as an indicator of high cardiovascular risk [10]. On the other hand, carotid intima-media thickness (cIMT) is a well-established marker of “structural” vascular health [11]. Despite this, there is little consensus on cut-off points for identifying high cardiovascular risk in adults. Values of cIMT > 0.9 mm have been suggested by expert panels as part of the proposals for considering high cardiovascular risk [10].
Physical inactivity refers to not meeting the international physical activity guidelines of 300 min per week of low-to-moderate physical activity or at least 150 min per week of vigorous-intensity physical activity [11,12]. It is more prevalent in sedentary, obese, and hypertensive populations, as well as in those with dyslipidemia or metabolic syndrome, and is associated with negative effects on both functional and structural vascular parameters, such as flow-mediated dilation (FMD), pulse wave velocity of the brachial artery (PWVba), and carotid intima-media thickness (cIMT) [10,12,13,14]. Several expert panels have recommended moderate-intensity continuous training (MICT) for 30–60 min per session most days of the week for individuals with elevated blood pressure or hypertension [15]. MICT has been shown to be crucial for preventing and treating hypertension [16,17], and recent evidence has highlighted the time-efficiency of high-intensity interval training [18,19] and of resistance training for improving EDys in a similar manner [20].
However, before starting any exercise training program in clinical populations such as those with elevated blood pressure or HTN, it is necessary to know the baseline cardiovascular response to physical effort through a progressive volitional exercise test, such as a cycling test [16,17]. The Astrand test is a useful progressive volitional cycling test that provides information about cycling power output in watts, which increases at each level: in women, power output increases by 25 watts per level, while in men it increases by 50 watts, and heart rate should also increase progressively at each level [17,18]. Interestingly, the theoretically predicted maximum heart rate (HRpredicted) from the well-known formula (i.e., 220 − age) is often overestimated or underestimated in physically inactive individuals [19]. Additionally, maximum heart rate (HRmax) is poorly reported in physically inactive hypertensive populations, who are generally unable to maintain a steady state at maximal intensity. Therefore, peak heart rate (HRpeak) is an easier and more useful cardiovascular marker to obtain under exercise test conditions in physically inactive populations and has been widely reported for exercise prescription. The aim of this study was to assess the association between the EDys markers FMD, PWVba, and cIMT and heart rate during a cycling test in HTN adults. A secondary aim was to characterize cardiovascular, anthropometric, and body composition outcomes in this population.
## 2.1. Participants
This preliminary descriptive study is part of an experimental randomized controlled clinical trial in which 75 adult men and women were invited to participate in an exercise training intervention and were assigned to one of three groups based on their blood pressure levels: arterial hypertension (HTN), elevated blood pressure (Ele), or a normotensive control group (CG). The study was conducted in Concepción, Chile between September 2022 and January 2023.
To determine the sample size, we used the G*Power 3.1.9.7 statistical sample size calculator with an alpha error probability of 0.05 and a $95\%$ confidence interval (CI) for three groups, expecting a medium-to-large effect size. Under these assumptions, a minimum of ten subjects per group would give a statistical power of ≥$90\%$.
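As an illustration only, statsmodels can reproduce this style of power calculation; the Cohen’s f below is an assumed input (the authors’ exact G*Power settings are not restated here), chosen so that ten subjects per group (N = 30, k = 3) reaches roughly the stated power.

```python
from statsmodels.stats.power import FTestAnovaPower

# Achieved power of a one-way ANOVA with 3 groups, N = 30 total,
# alpha = 0.05, under an assumed (very large) effect size f = 0.75.
power = FTestAnovaPower().power(effect_size=0.75, nobs=30, alpha=0.05, k_groups=3)
print(f"achieved power with N=30, f=0.75: {power:.2f}")
```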
The eligibility criteria for this study were as follows: (i) HTN, elevated blood pressure (controlled and on updated pharmacotherapy), or healthy normotensive; (ii) normal weight, overweight, or obese (as determined by body mass index [BMI]); (iii) normal or hyperglycaemic (T2DM, controlled and on updated pharmacotherapy); (iv) living in urban areas of the city of Concepción; and (v) the demonstrated ability to adhere to all measurements and stages of the study. Exclusion criteria included: (i) abnormal ECG; (ii) uncontrolled HTN (SBP ≥ 169 mmHg or DBP > 95 mmHg); (iii) morbid obesity (BMI ≥ 40 kg/m2); (iv) type 1 diabetes mellitus; (v) cardiovascular disease (e.g., coronary artery disease); (vi) diabetes complications such as varicose ulcers on the feet or legs, or a history of wounds, nephropathies, or muscle-skeletal disorders (e.g., osteoarthrosis); (vii) recent participation in weight loss treatment or exercise training programs (within the past 3 months); and (viii) the use of pharmacotherapy that can influence body composition.
All participants were informed about the study procedures and potential risks and benefits, and provided written consent. The study was conducted following the Declaration of Helsinki and was approved by the Ethics Committee of Universidad Andres Bello, Chile (Approval N° $\frac{026}{2022}$). The clinical trial is registered under the clinical trials.gov international scientific platform under the code NCT05710653.
In the first stage of enrolment ($$n = 75$$), subjects were screened, and ten participants were excluded: $$n = 3$$ due to bone diseases, $$n = 3$$ due to a history of heart disease, $$n = 3$$ because they were already enrolled in other exercise activities, and $$n = 1$$ due to being under weight loss treatment. Thus, a total of $$n = 65$$ subjects participated in this first stage of our clinical trial. The final sample sizes per group were as follows: HTN $$n = 18$$, Ele $$n = 22$$, and CG $$n = 21$$. The study design is shown in Figure 1.
## 2.2. Endothelial Dysfunction Outcomes
To assess the three main EDys outcomes (FMD, PWVba, and cIMT), an ultrasound system with a 7–12 MHz linear-array transducer (GE Medical Systems, Model LOGIQ-E PRO, Milwaukee, WI, USA) was used for non-invasive vascular measurements of the brachial and carotid arteries. All participants were instructed to refrain from eating, exercising, consuming caffeine, or taking vasoactive drugs before the test.
## 2.2.1. Flow-Mediated Dilation
To measure FMD, each participant was positioned in a supine position and allowed to rest for 20 min. An ultrasound probe with a 60° inclination angle was then used in a longitudinal plane to explore the anterior and posterior lumen-intima interfaces at a site 1–3 cm proximal in the antecubital fossa to measure the brachial diameter and central flow velocity (pulsed Doppler) before the occlusion. The arm was abducted approximately 80° from the body and the forearm was supinated, and an adjustable mechanical metal arm precision holder with a magnetic base for a three-axis (X-Y-Z) positioning stage (EDITM, Progetti e Sviluppo, Italy) was used to standardize the position and avoid evaluator bias. Next, a blood pressure cuff was positioned on the left arm and inflated at 50 mmHg (over the SBP baseline) for 5 min. Information was recorded during this time, including (i) a baseline image that was obtained before the occlusion, (ii) a 3-min video (starting 60 s before the end of the occlusion and maintained until 2 min after cuff deflation), and (iii) a final image that was taken after the occlusion. The peak artery diameter after cuff deflation was recorded, and FMD was calculated as the percentage (%) rise in peak diameter from the preceding baseline diameter and the image after deflation [21], using the following formula:

$$\mathrm{FMD}\ (\%) = \frac{\text{peak diameter} - \text{baseline diameter}}{\text{baseline diameter}} \times 100$$

The intra-session coefficient of variation has been ≤$1\%$ for the baseline diameter in our previous studies [18]. Reliability was estimated by intra-class correlation coefficients (ICC) based on four baseline measurements, with ICC values of 0.91 for the baseline diameter and 0.83 for FMD% (previously used data).
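The formula translates directly into a small helper; the example diameters below are invented, chosen to land just above the 6.5% FMD cut-off mentioned earlier.

```python
# Flow-mediated dilation: % rise of peak diameter over baseline.
def fmd_percent(baseline_mm: float, peak_mm: float) -> float:
    return (peak_mm - baseline_mm) / baseline_mm * 100.0

print(f"{fmd_percent(baseline_mm=3.80, peak_mm=4.05):.1f} %")  # 6.6 %
```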
## 2.2.2. Carotid Intima-Media Thickness
To assess cIMT, we used the same ultrasound Doppler with the 7−12 MHz linear-array transducer. The participants were instructed to lie in a supine position and turn their heads slightly to the left and right. Once the carotid bulb was identified, a B-mode image was obtained for longitudinal right orientation of the common carotid artery. The scan was focused 1 cm from the bifurcation on the far wall of the common carotid artery. All images were recorded and analyzed offline using ultrasound software. Measurements were recorded at the end-diastolic stage, and the value for each side was obtained from the mean of three wall measurements of the cIMT [22]. A cIMT value of ≥0.9 mm was considered pathological [10], and a maximum thickness of ≥1.2 mm was indicative of pathological atherosclerosis [23].
## 2.2.3. Pulse Wave Velocity
The PWVba was measured by analyzing oscillometric pressure curves registered from the upper arm over the brachial artery and was expressed in m·s−1. An electronic device with an inflation/deflation cuff positioned on the left arm (Arteriograph, TENSIOMEDTM, Budapest, Hungary) was used for the measurement. This equipment automatically inflates/deflates the cuff and maintains occlusion in the left arm for 5 min to complete the pre-test/post-test occlusion protocol. After the measurement, the information was analyzed by a computer program (Arteriograph Software v.1.9.9.2; TensioMed, Budapest, Hungary) and a PDF information sheet was downloaded. The algorithm used to measure blood pressure in the ArteriographTM device has been validated previously [24]. A cut-off point of PWVba > 10 m·s−1 denotes high arterial stiffness, and thus a high cardiovascular risk [10]. Example representations of the FMD, PWVba, and cIMT measurements can be seen in Figure 2.
## 2.2.4. Blood Pressure and Heart Rate at Rest
In a seated position and after at least 10 min of rest, systolic (SBP) and diastolic blood pressure (DBP) were used to classify participants as arterial hypertension (HTN), elevated blood pressure (Ele), or normotensive control (CG) following the latest American Heart Association categorization (i.e., ‘normal’ blood pressure SBP/DBP <120/<80 mmHg; ‘elevated’ blood pressure SBP/DBP 120–129/<80 mmHg; ‘stage 1’ HTN SBP/DBP 130–139/80–89 mmHg; ‘stage 2’ HTN SBP/DBP ≥140/≥90 mmHg) [25]. Measurements were performed with an OMRONTM digital electronic BP monitor (model HEM 7114, Chicago, IL, USA). Two recordings were made using the electronic device with an inflation/deflation cuff positioned on the left arm. Immediately after the BP measurement, each subject was fitted with a heart rate monitor watch on the left wrist (Model A370, PolarTM, Kempele, Finland), and the resting heart rate was registered.
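A sketch of this categorization as a function; the rule that the higher category wins when SBP and DBP disagree follows standard AHA practice, but is an assumption as far as this paper’s text goes.

```python
# AHA blood pressure categories; higher category wins when SBP/DBP disagree.
def bp_category(sbp: int, dbp: int) -> str:
    if sbp >= 140 or dbp >= 90:
        return "stage 2 HTN"
    if sbp >= 130 or dbp >= 80:
        return "stage 1 HTN"
    if sbp >= 120:
        return "elevated"
    return "normal"

for sbp, dbp in [(118, 76), (124, 78), (135, 84), (150, 95)]:
    print(f"{sbp}/{dbp} mmHg -> {bp_category(sbp, dbp)}")
```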
## 2.2.5. Progressive Volitional Cycling Test and Heart Rate during Exercise
The modified Astrand progressive volitional cycling test was used to determine both heart rate and power output in watts for each HTN, Ele, and CG participant [26,27]. During the test, heart rate was measured in the first (HR25–50), second (HR50–100), third (HR75–150), fourth (HR100–200), and fifth (HR125–250) stages of the test progression, with different load graduations for men and women. Given the evident differences in cycling performance across the HTN, Ele, and CG groups, some individuals progressed further in the test than others; therefore, to ensure more robust statistical analyses with our associative regression models, we only included the first three stages of the Astrand test, as all subjects completed these stages. For the test, an electromagnetic cycle ergometer (model Ergoselect 200, ERGOLINETM, Lindenstrasse, Germany) was used. Heart rate was continuously monitored using a telemetric heart rate sensor (Model A370, PolarTM, Kempele, Finland), and the maximum heart rate of each Astrand test stage was registered.
## 2.2.6. Anthropometric and Body Composition (Secondary Outcomes)
We measured body mass (kg), waist circumference (cm), body fat (%, kg), and skeletal muscle mass (%), as well as height (m). The first four variables were measured using a digital bio-impedance scale (OMRONTM model HEM 7114TM, Chicago, IL, USA), while height was measured using a stadiometer (SECATM, Model 214, Hamburg, Germany). Participants wore light clothing and no shoes during the measurements. We calculated body mass index (BMI) using body mass and height measurements to determine the degree of obesity according to standard criteria for normoweight, overweight, or obesity. We also recorded the basal metabolic rate and estimated body age. Table 1 presents the baseline characteristics of the study groups.
## 2.3. Statistical Analyses
Data are presented as mean with standard deviation (±SD). The Shapiro-Wilk test was used to test the normality assumption of all variables. The Wilcoxon rank sum test was used for variables that were not normally distributed. A one-way ANOVA was performed to test differences between groups, adjusted for weight, height, gender, SBP, and the use of beta-blockers. Additionally, a post-hoc Tukey’s test was applied after the ANOVA for group comparisons (HTN × Ele × CG). We also reported a trend analysis (ptrend) to test for potential (linear) tendencies to increase or decrease a particular outcome through the categories of different blood pressures. These analyses were applied using the Graph Pad Prism 8.0 software (Graph Pad Software, San Diego, CA, USA).
Finally, linear regression was applied to associate EDys outcomes (FMD, PWVba, cIMT) with heart rate (beats/min) in the first three stages of the progressive volitional cycling Astrand test (i.e., 25–50, 50–100, and 75–150 watts). The β value (for association) and R² (for predictive capacity) were tested with heart rate for these EDys outcomes. In the regression model, each of HR25–50, HR50–100, and HR75–150 watts was used as an independent predictor of FMD, PWVba, and cIMT (in a backward manner), adjusted for weight, height, gender, and SBP. These statistical analyses were performed with SPSS statistical software version 18 (SPSS™ Inc., Chicago, IL, USA), and statistical significance was set at p ≤ 0.05.
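An illustrative re-creation of this regression in Python (the study used SPSS); the data are simulated and the variable names are assumptions, so the printed estimates are not the study’s results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One EDys outcome (cIMT) regressed on heart rate at a given test stage,
# adjusted for weight, height, gender, and SBP, on simulated data.
rng = np.random.default_rng(1)
n = 18  # e.g., the size of the HTN group
df = pd.DataFrame({
    "cimt": rng.normal(0.75, 0.10, n),
    "hr_75_150": rng.normal(130, 12, n),
    "weight": rng.normal(80, 10, n),
    "height": rng.normal(1.68, 0.08, n),
    "gender": rng.choice(["M", "F"], n),
    "sbp": rng.normal(143, 9, n),
})
fit = smf.ols("cimt ~ hr_75_150 + weight + height + C(gender) + sbp", data=df).fit()
print(f"beta(HR) = {fit.params['hr_75_150']:.3f}, R^2 = {fit.rsquared:.3f}, "
      f"p = {fit.pvalues['hr_75_150']:.3f}")
```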
## 3.1. Baseline Characteristics
As expected from the nature of the study, SBP was significantly higher in HTN vs. CG (143.2 ± 9.1 vs. 110.4 ± 7.0, $p \leq 0.0001$) and in Ele vs. CG (124.9 ± 2.6 vs. 110.4 ± 7.0, $p \leq 0.001$) (Table 1). Similar results were observed for DBP comparing HTN vs. CG (87.3 ± 10.7 vs. 73.8 ± 7.2, $p \leq 0.0001$) and Ele vs. CG (83.3 ± 7.9 vs. 73.8 ± 7.2, $p \leq 0.001$) (Table 1), for MAP comparing HTN vs. CG (105.9 ± 10.2 vs. 86.0 ± 7.1, $p \leq 0.0001$) and Ele vs. CG (97.1 ± 6.1 vs. 86.0 ± 7.1, $p \leq 0.001$) (Table 1), and for PP comparing HTN vs. CG (55.9 ± 1.6 vs. 36.6 ± 2.2, $p \leq 0.0001$) and Ele vs. CG (41.6 ± 5.3 vs. 36.6 ± 2.2, $p \leq 0.001$) (Table 1).
## 3.2. Anthropometric and Body Composition (Secondary Outcomes)
For BMI, there were significant differences between the HTN and Ele groups (29.5 ± 4.7 vs. 29.7 ± 3.8 kg/m2, $p \leq 0.001$) and between the Ele and CG groups (29.7 ± 3.8 vs. 26.2 ± 3.1 kg/m2, $p \leq 0.001$). There was a significant trend ($$p = 0.004$$) towards increasing BMI from the CG to the Ele and HTN groups (Figure 3A). For waist circumference, there were significant differences between the HTN and Ele groups (99.8 ± 8.7 vs. 100.1 ± 10.1 cm, $p \leq 0.001$) and between the Ele and CG groups (100.1 ± 10.1 vs. 90.2 ± 10.5 cm, $p \leq 0.001$) (Figure 3B). There was a significant trend ($p \leq 0.001$) towards increasing waist circumference from the CG to the Ele and HTN groups (Figure 3B). There were no differences among groups in terms of body fat (%), skeletal muscle mass, or body age.
## 3.3. Endothelial Dysfunction Parameters (Main Outcomes)
For FMD, PWVba, and cIMT, there were no significant differences among groups (Figure 4A–C). There was, however, a significant trend ($$p = 0.047$$) towards increasing PWVba from the CG to the Ele and HTN groups (Figure 4B).
## 3.4. Heart Rate during Progressive Volitional Cycling Test in the HTN, Ele, and Control Groups
Heart rate during the progressive volitional cycling test is shown in Figure 5. In the HTN group, the HRpredicted was 177.7 beats/min, while the HRpeak in the cycling test was 166.0 beats/min (Figure 5A). In the Ele group, the HRpredicted was 181.6 beats/min, while the HRpeak was 163.5 beats/min (Figure 5B). In the CG group, the HRpredicted was 180.0 beats/min, while the HRpeak was 152.8 beats/min (Figure 5C). Overall, the HTN, Ele, and CG groups each showed a progressive increase in heart rate across the cycling stages of 25–50 W, 50–100 W, 75–150 W, 100–200 W, and 125–250 W: from 96.8 to 166.0 beats/min in HTN (+69.2 beats/min), from 93.8 to 163.5 in Ele (+69.7 beats/min), and from 90.7 to 152.8 in CG (+62.1 beats/min) (Figure 5A–C). There were no significant differences in HRR, HRpredicted, or HRpeak among groups (Figure 5D–F). The HRpeak showed a significant increasing trend from the CG (146.4) to the Ele (156.2) and HTN (159.5 beats/min) groups (Figure 5F).
## 3.5. Association between EDys Outcomes FMD, PWVba, and cIMT with Different Heart Rate during a Progressive Volitional Cycling Test in HTN, Ele, and Control Normotensive Subjects
When FMD and PWVba were correlated with HR25–50, HR50–100, and HR75–150 watts of power output in the progressive volitional cycling test, no significant correlations were found in any of the HTN, Ele, and CG groups (Figure 6A–F). Similarly, no significant correlations were found between cIMT and the HR25–50 and HR50–100 stages of the Astrand test (Figure 6G,H). Although cIMT did not show an association with HR75–150 in the CG and Ele groups, there was a significant correlation between cIMT and HR75–150 in the HTN group (R² = 47.1%, β = −0.650, $$p = 0.038$$) (Figure 6I).
## 4. Discussion
The study found that there was a significant association between the vascular outcome cIMT and heart rate during the third stage of the cycling test in individuals with HTN (i.e., HR75–150). The heart rate during different stages of the test had a high predictive range for EDys outcomes, including FMD, PWVba, and cIMT in the HTN group. Additionally, there was a trend towards increased cIMT and PWVba in individuals with HTN compared to those with normal blood pressure, but no differences were observed in FMD. These findings were observed in HTN patients who had a higher prevalence of overweight/obesity as indicated by weight, BMI, and WC measurements.
Previous studies have reported a direct relationship between cIMT and an attenuated chronotropic response in a stress test, as was observed in this study [28]. The link between cIMT and the attenuated chronotropic response to exercise appears to be an imbalance in sympathovagal activation, reflecting baroreflex sensitivity dysfunction. This dysfunction has been described as a probable cause of early atherosclerosis risk factors such as inflammation [29,30]. Therefore, an impaired chronotropic response to exercise could be an indicator of the presence of EDys, even in subjects without cardiovascular risk factors such as HTN, as we found [28]. The study also observed that individuals with HTN had a higher prevalence of overweight/obesity, as indicated by weight, BMI, and WC measurements. These findings emphasize the importance of monitoring body weight and body composition as part of the management of HTN and EDys.
The findings are consistent with previous research indicating that individuals with HTN are more likely to be overweight or obese, as evidenced by measures of weight, body mass index (BMI), and waist circumference (WC) (Table 1). Furthermore, studies have shown that individuals with HTN and other cardiovascular risk factors, such as physical inactivity and a significant consumption of unhealthy foods tend to have impaired vasodilation after standardized FMD trials, high PWVba, and high cIMT [3,6].
The present study’s findings regarding PWVba are consistent with those of Park et al., who found that PWVba increased and FMD decreased as frailty status increased in older adults. Specifically, PWVba increased from the non-frail group (1615.7 ± 209.9 cm/s [i.e., 16.1 m·s−1]) to the pre-frail (1815.2 ± 265.0 cm/s [i.e., 18.1 m·s−1]) and frail (1829.9 ± 256.0 cm/s [i.e., 18.2 m·s−1]) groups, which is similar to the increasing trend observed across our normotensive (7.7), elevated (8.4), and hypertensive (8.7 m·s−1) blood pressure groups (Figure 4). Furthermore, Park et al. found that FMD was lower in the pre-frail and frail groups ($3.4\%$ and $3.1\%$, respectively) compared to the non-frail group ($5.2\%$), with the frail group showing approximately two times lower FMD than the non-frail group [31]. In contrast, our study found that FMD was higher in the normotensive group ($17.3\%$) than in the HTN group ($15.2\%$). However, this difference may be partially explained by the age difference between the two studies, as the average age of the groups in our study (HTN: 42.2 years, Ele: 38.3 years, CG: 39.9 years) was younger than that of the groups in the study by Park et al. (non-frail: 74.1 years, pre-frail: 75.3 years, frail: 75.3 years).
Our study found a significant association between the vascular outcome cIMT and heart rate during the third stage of the cycling exercise test in the HTN group (Figure 6I). Additionally, heart rate during the different stages of the exercise test had a high predictive range for EDys outcomes, including FMD, PWVba, and cIMT. There was also a trend towards increased PWVba from the normotensive to the elevated and HTN groups, but no group differences were observed in FMD or cIMT. Previous research has not found an association between resting heart rate and vascular parameters such as FMD. Our findings, which show an association between heart rate during exercise and EDys parameters, could be useful for predicting the behavior of vascular parameters and for avoiding more invasive, expensive, and lengthy clinical tests.
In conclusion, this study provides evidence for the association between markers of endothelial dysfunction and heart rate during a progressive volitional cycling test in individuals with HTN. These findings suggest that monitoring heart rate during exercise testing could be a useful tool for assessing the risk of EDys in individuals with HTN. In addition, the study highlights the importance of monitoring body weight and composition as part of the management of HTN and EDys.
## Strengths and Limitations
This study had several limitations. Firstly, all variables were measured only in the afternoon, and PWVba was measured using an oscillometric cuff digital device rather than the more commonly used tonometry method; however, the equipment used for the PWVba measurement has been validated previously. Secondly, we did not measure metabolic or plasma parameters, as they were not primary objectives of this study. Nevertheless, none of the participants reported receiving hypercholesterolemia/dyslipidemia treatment within the past 6 months.
## 5. Conclusions
Heart rate during a progressive cycling exercise test is associated with vascular parameters in hypertensive patients, particularly during the second and third stages of the Astrand exercise test, indicating a high predictive capacity for vascular parameters in HTN when compared to control normotensive peers. Further research is needed to investigate the mechanisms behind these results.
# The Association between the Differential Expression of lncRNA and Type 2 Diabetes Mellitus in People with Hypertriglyceridemia
## Abstract
Compared with diabetic patients with normal blood lipid levels, diabetic patients with dyslipidemia, such as high triglycerides, have a higher risk of clinical complications and more severe disease. For subjects with hypertriglyceridemia, the lncRNAs affecting type 2 diabetes mellitus (T2DM) and the specific mechanisms remain unclear. Transcriptome sequencing was performed on peripheral blood samples of new-onset T2DM patients (six subjects) and normal blood glucose controls (six subjects) among hypertriglyceridemia patients using gene chip technology, and differentially expressed lncRNA profiles were constructed. After validation using the GEO database and RT-qPCR, lncRNA ENST00000462455.1 was selected. Subsequently, fluorescence in situ hybridization (FISH), real-time quantitative polymerase chain reaction (RT-qPCR), CCK-8 assay, flow cytometry, and enzyme-linked immunosorbent assay (ELISA) were used to observe the effect of ENST00000462455.1 on MIN6 cells. When ENST00000462455.1 was silenced in MIN6 cells under high-glucose and high-fat conditions, the relative cell survival rate and insulin secretion decreased, the apoptosis rate increased, and the expression of the transcription factors Ins1, Pdx-1, Glut2, FoxO1, and ETS1 that maintain the function and activity of pancreatic β cells decreased ($p \leq 0.05$). In addition, using bioinformatics methods, we found that ENST00000462455.1/miR-204-3p/CACNA1C could be the core regulatory axis. Therefore, ENST00000462455.1 is a potential biomarker for hypertriglyceridemia patients with T2DM.
## 1. Introduction
Diabetes has become the third leading chronic disease that seriously endangers human health. In 2021, there were about 537 million people with diabetes worldwide, and this number is projected to reach 643 million by 2030 and 783 million by 2045; in 2021 alone, an estimated 6.7 million people died from diabetes-related causes [1]. Type 2 diabetes mellitus (T2DM) is an endocrine and metabolic disease caused by a combination of genetic and environmental factors and characterized by fasting and postprandial hyperglycemia; it accounts for more than $90\%$ of diabetes cases [2]. Existing evidence indicates that people with T2DM have a $15\%$ increase in all-cause mortality compared with people without diabetes [3].
Pancreatic β cells play an essential role in maintaining glucose homeostasis [4]. Glucose is a major physiological regulator for pancreatic β cells and can be metabolized via pancreatic β cells, thereby stimulating insulin secretion [5,6]. However, in chronic hyperglycemic environments and sustained glucose metabolism, pancreatic β cells are prone to damage and dysfunction, resulting in defective insulin secretion [7]. In addition, dyslipidemia also plays an important role in the development of T2DM. On the one hand, the lipotoxicity caused by dyslipidemia could affect the development of insulin resistance, which in turn aggravates the occurrence of lipid metabolism disorders, and a vicious circle is established [8]. On the other hand, the accumulation of abnormally elevated triglycerides in pancreatic β cells leads to their dysfunction and the further apoptosis of pancreatic β cells, which eventually causes the disorder of insulin secretion and the increase of blood glucose, thus inducing T2DM [9]. Meanwhile, T2DM complicated with hyperlipidemia is more likely to induce complications such as cardiovascular and cerebrovascular diseases [10]. Therefore, whether from a public health or a clinical perspective, hypertriglyceridemia patients with T2DM should be paid more attention.
Long noncoding RNAs (lncRNAs) represent a class of transcripts longer than 200 nucleotides with limited protein-coding potential [11]. They affect downstream gene expression and promote/inhibit disease development mainly by binding to targeted mRNAs or serving as endogenous competing RNAs for miRNAs [12]. Studies have found that lncRNAs are related to the development of T2DM and its related diseases. For example, lncRNA PVT1 can regulate insulin secretion and lipid metabolism by affecting miR-20a-5p expression, and it is also associated with end-stage renal disease in T2DM patients [13,14]. The lncRNA MALAT1 plays an important role in the pathophysiology, inflammation, and progression of T2DM through regulating gene transcription [15]. MEG3 is overexpressed in patients with T2DM and is closely related to the occurrence of diabetic retinopathy [16]. Meanwhile, more than 1000 lncRNAs have been found in human islet cells, many of which are highly islet-specific, suggesting that they could have important and unique roles in regulating pancreatic function [13]. Our study aims to screen the differentially expressed lncRNAs between new-onset T2DM and normal blood glucose controls in hypertriglyceridemia subjects, and then explore the effects and possible mechanisms of these lncRNAs on pancreatic β cell function and activity, thus providing some references for the prevention and treatment of T2DM in people with hypertriglyceridemia.
## 2.1. Screening and Validation of Differentially Expressed lncRNAs
Blood samples from six newly diagnosed T2DM patients and six patients with normal blood glucose were used for RNA sequencing. Basic information on the subjects and the data filtering process are shown in Tables S1 and S2, respectively. The cleaned data were used for subsequent analysis to ensure analysis quality. We obtained a total of 3163 differentially expressed lncRNAs (1439 up and 1724 down) between the T2DM group and the control group based on a p value less than or equal to 0.05. The corresponding volcano plot and heat map are shown in Figure S1. Meanwhile, a total of 25 differentially expressed lncRNAs (10 up and 15 down) were found between the two groups based on an adjusted p value less than or equal to 0.05 (Table 1).
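The two thresholds correspond to filtering on raw versus multiplicity-adjusted p values; the sketch below assumes a Benjamini–Hochberg adjustment, which this section does not explicitly state, and the values are invented for illustration.

```python
import pandas as pd
from statsmodels.stats.multitest import multipletests

res = pd.DataFrame({
    "lncRNA": [f"lnc_{i}" for i in range(6)],
    "log2FC": [1.8, -2.1, 0.4, -0.9, 2.6, -1.5],
    "pvalue": [0.001, 0.0004, 0.30, 0.04, 0.00001, 0.02],
})
res["padj"] = multipletests(res["pvalue"], method="fdr_bh")[1]
raw_hits = res[res["pvalue"] <= 0.05]   # broad list (3163 lncRNAs in the study)
strict_hits = res[res["padj"] <= 0.05]  # conservative list (25 in the study)
print(len(raw_hits), "raw hits;", len(strict_hits), "after adjustment")
print(strict_hits[["lncRNA", "log2FC", "padj"]])
```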
Firstly, we analyzed the genes corresponding to the above 25 lncRNAs through the GSE130991 dataset, and a total of 13 genes were found in the dataset. Specifically, the gene PLEKHM2, corresponding to the lncRNA ENST00000462455.1, was statistically significant (Table 2). Meanwhile, RT-qPCR was used to verify the expression levels of lncRNA ENST00000462455.1 in 120 hypertriglyceridemia T2DM patients and 120 hypertriglyceridemia patients with normal FPG. The results indicated that the expression level of ENST00000462455.1 in the T2DM subjects was decreased ($t = 5.673$, $p \leq 0.001$), and the same results were observed in gender and age subgroups (Figure 1). In addition, the ROC curve was used to assess the diagnostic power of ENST00000462455.1 (Figure S2 and Table S3).
## 2.2. Effects of lncRNA ENST00000462455.1 on the Activity and Function of MIN6 Cells
Firstly, we detected the localization and distribution of ENST00000462455.1 in MIN6 cells by FISH. As internal reference genes, 18S was located almost entirely in the cytoplasm and U6 almost entirely in the nucleus. The FISH results indicated that ENST00000462455.1 was distributed in both the cytoplasm and the nucleus (Figure 2). Next, we analyzed the expression of ENST00000462455.1 in MIN6 cells cultured for 24 h, 36 h, 48 h, 72 h, and 96 h in the control, HG, HF, and HG + HF groups. The results indicated that, compared with the HF group, the expression level of ENST00000462455.1 in MIN6 cells in the HG + HF group was decreased after 48 h (HF vs. HG + HF: 1.92 ± 0.05 vs. 0.95 ± 0.17, $p \leq 0.001$), 72 h (HF vs. HG + HF: 2.06 ± 0.29 vs. 1.21 ± 0.17, $p \leq 0.01$), and 96 h (HF vs. HG + HF: 1.37 ± 0.05 vs. 1.07 ± 0.03, $p \leq 0.01$) of culture in the corresponding environment (Figure 3).
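Relative RT-qPCR expression values like those above are commonly derived with the 2^−ΔΔCt method; the sketch below is generic and assumes that scheme, since the quantification details are not restated in this section.

```python
# Generic 2^-ddCt sketch; the method choice is an assumption here.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref            # normalise to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # normalise to control condition
    return 2.0 ** (-dd_ct)

# A target amplifying one cycle later than in the control condition
# implies roughly half the expression.
print(relative_expression(24.0, 17.0, 23.0, 17.0))  # 0.5
```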
To further explore the effect of lncRNA ENST00000462455.1 on the activity and function of MIN6 cells, an siRNA against ENST00000462455.1 was transfected into MIN6 cells to silence the expression of the lncRNA. The results of RT-qPCR confirmed that the silencing effect was stable (Figure S3). Subsequently, we explored the effect of ENST00000462455.1 on MIN6 activity with the CCK-8 assay. Taking the HF group as a reference, we found that the relative survival rate of MIN6 cells in the HG + HF group with si-lncRNA was lower than that in the si-NC group (si-NC vs. si-lncRNA: 1.24 ± 0.21 vs. 1.06 ± 0.16, $p \leq 0.05$) (Figure 4A). Similarly, by flow cytometry, we observed that the relative apoptosis rate of MIN6 cells in the HG + HF group with si-lncRNA was higher than that in the si-NC group (Figure 4B). Meanwhile, the insulin level in the supernatant of MIN6 cells cultured under the corresponding glycolipid environment for 48 h was detected by ELISA, thus assessing the effect of ENST00000462455.1 on insulin secretion. The results showed that the insulin secretion of MIN6 cells in the HG + HF group with si-lncRNA was lower than that in the si-NC group (si-NC vs. si-lncRNA: 12.06 ± 0.70 mIU/L vs. 9.07 ± 1.20 mIU/L, $p \leq 0.001$; si-NC vs. si-lncRNA (relative): 1.90 ± 0.11 vs. 1.33 ± 0.18, $p \leq 0.001$) (Figure 4C). In addition, RT-qPCR was used to detect the expression levels of relevant key transcription factors. Taking the HF group as a reference, we found that the expression levels of Ins1, Pdx-1, Glut2, FoxO1, and ETS1 in the HG + HF group with si-lncRNA were lower than those in the si-NC group ($p \leq 0.05$) (Figure 4D). Therefore, under a high-glucose and high-fat environment, decreased expression of lncRNA ENST00000462455.1 lowers MIN6 cell activity and leads to dysfunction.
## 2.3. Exploration of ceRNA Mechanism for lncRNA ENST00000462455.1
We further explored the possible mechanism of ENST00000462455.1 by constructing a ceRNA network comprising lncRNA ENST00000462455.1, its 14 corresponding miRNAs, and 118 mRNAs (Figure 5). Given that miRNAs play an important role in the ceRNA network, we identified key miRNAs by searching the literature. Based on the available evidence, we found that miR-204-3p and miR-125a-3p were associated with type 2 diabetes or pancreatic β cell dysfunction, and 29 mRNAs corresponding to these two miRNAs were found in the ceRNA network (Table S4). Subsequently, GO and KEGG analyses were performed on these mRNAs (Figure 6A,B). The results indicated that CACNA1C, CSRP1, ANXA6, KCNIP2, and DPYSL2 were enriched in multiple BP, CC, and MF terms (Tables S5–S7). In particular, the KEGG analysis showed that CACNA1C was enriched in multiple pathways, including type 2 diabetes and insulin secretion (Table S8). Meanwhile, compared with the control group, the GSEA results indicated that CACNA1C was a core gene whose expression was decreased in hypertriglyceridemia subjects with T2DM (Table S9, Figure S4). In addition, we explored the interactions of key mRNAs in the ceRNA network by establishing a PPI network, and a key network module was identified by cluster analysis: CSRP1-ANXA6-DPYSL2-CACNA1C-RCAN1-KCNIP2 (MCODE score: 2.8) (Figure 6C,D). Based on the above results, the possible ceRNA regulatory axes of lncRNA ENST00000462455.1 are shown in Figure 6E. Among them, ENST00000462455.1/miR-204-3p/CACNA1C may be the core regulatory axis.
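The network logic can be sketched as follows; the edges reproduce only the axes named in the text (a small subset of the full 14-miRNA/118-mRNA network), and since MCODE is a Cytoscape app, the maximal k-core is used here only as a rough stand-in for its module detection:

```python
import networkx as nx

# Edges follow the lncRNA -> miRNA -> mRNA axes named in the text
lnc = "ENST00000462455.1"
G = nx.Graph([(lnc, "miR-204-3p"), (lnc, "miR-125a-3p"),
              ("miR-204-3p", "CACNA1C"), ("miR-204-3p", "KCNIP2"),
              ("miR-204-3p", "ANXA6"), ("miR-204-3p", "CSRP1"),
              ("miR-125a-3p", "RCAN1"), ("miR-125a-3p", "DPYSL2")])

# Take the maximal k-core and rank its nodes by within-core degree
core = nx.k_core(G)
module = sorted(core.degree, key=lambda kv: kv[1], reverse=True)
print("candidate module:", module)
```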
## 3. Discussion
Protein-coding sequences account for only about $2\%$ of the genome [17,18]. Although noncoding RNAs are not translated into proteins, they have emerged as fundamental regulators of gene expression. Existing evidence indicates that some islet lncRNAs map close to genes related to the function or development of pancreatic β cells and thus may have specific regulatory functions for β cell gene expression [19,20,21]. In our study, transcriptome sequencing was first performed on whole blood samples of hypertriglyceridemia subjects with T2DM or normal FPG to identify differentially expressed lncRNAs. Subsequently, the differentially expressed lncRNA ENST00000462455.1 was verified with GEO data and RT-qPCR, and its potential value in clinical settings was assessed via ROC analysis. In addition, compared with the HF environment, we found that the expression of ENST00000462455.1 in MIN6 cells decreased under the HG + HF environment. Therefore, lncRNA ENST00000462455.1 was regarded as differentially expressed between hypertriglyceridemia patients with T2DM and those with normal FPG.
We further explored the effect of ENST00000462455.1 on the function and activity of MIN6 cells. After silencing ENST00000462455.1, we found that the activity of MIN6 cells decreased and the apoptosis rate increased. Meanwhile, insulin secretion was also reduced. In addition, the expression levels of transcription factors, including Ins1, Pdx-1, Glut2, FoxO1, and ETS1, were decreased after silencing ENST00000462455.1. As an inherent regulatory gene of insulin, Ins1 is regulated by circulating glucose levels and plays an important role in maintaining mature pancreatic β cell mass and function, insulin secretion and reserve, and glucose homeostasis [22,23]. Similarly, the function of Pdx-1 is to maintain mature islet function, mass, and the regeneration of pancreatic β cells [24]. Meanwhile, Pdx-1 may also be a key factor in the adverse effects of lipid metabolism disorders on pancreatic islets [25]. FoxO1 regulates the proliferation, apoptosis, and differentiation of pancreatic β cells and plays a role in insulin secretion and resistance to oxidative stress [26]. Simultaneously, FoxO1 is closely related to Ins1 and Pdx-1. A previous study found that FoxO1 transgenic mice had significantly elevated expression levels of Ins1 and Pdx-1 [27]. In fact, the relationship between FoxO1 and Pdx-1 has been confirmed during development: FoxO1 can activate itself in the early stage of pancreatic development by mediating the expression of Pdx-1 [28]. Notably, although the function of Glut2 is merely to catalyze the passive transport of glucose across plasma membranes, this transport activity is important for the control of cellular mechanisms impinging on gene expression, the regulation of intracellular metabolic pathways, and the induction of hormonal and neuronal signals, which together form the basis of an integrated interorgan communication system controlling glucose homeostasis [29]. In addition, a previous study also found that the overexpression of Ets-1 in MIN6 cells could protect them from severe hypoxic injury in a mitochondria-dependent manner [30].
One of the main mechanisms of lncRNAs is that they can act as competing endogenous RNAs that sequester miRNAs and thereby affect the expression of downstream genes, promoting or inhibiting the development of diseases. In our study, ENST00000462455.1 was observed in both the cytoplasm and nucleus by FISH. Existing evidence indicates that lncRNAs stably expressed in the cytoplasm are ideal ceRNAs (although recent studies have found that some nuclear-localized lncRNAs can also act as ceRNAs). Therefore, we further constructed the ceRNA network of ENST00000462455.1 by bioinformatics methods and found that ENST00000462455.1/miR-125a-3p/RCAN1/DPYSL2 may be one of the regulatory axes. Previous studies have shown that miR-125a-3p can inhibit the expression of insulin receptors via the insulin signaling pathway, resulting in insulin resistance and thus in lipid and carbohydrate metabolism disorders [31]. Meanwhile, miR-125a-3p is also related to diabetic cardiomyopathy and diabetic nephropathy [32]. RCAN1 has a role in pancreatic β cell dysfunction in T2DM [33]. Some studies found that the acute induction of RCAN1 by increased reactive oxygen species and hyperglycemia can inhibit endocrine cell apoptosis and protect these cells from damage. However, other evidence indicates that chronic overexpression of RCAN1 can adversely affect cells, leading to pathological changes in neurons and endocrine cells associated with T2DM [33]. Therefore, further studies of the molecular mechanisms of RCAN1 are needed.
Another possible ceRNA regulatory axis is ENST00000462455.1/miR-204-3p/KCNIP2/CACNA1C/ANXA6/CSRP1. Among these, ENST00000462455.1/miR-204-3p/CACNA1C may be the core regulatory axis. Previous studies found that the expression of miR-204 is increased in pancreatic islets in T2DM and that elevated serum miR-204 is a marker of ongoing pancreatic β cell death [34]. Meanwhile, miR-204 can directly target and inhibit the endoplasmic reticulum transmembrane factor protein kinase R-like endoplasmic reticulum kinase (PERK) and its downstream signaling pathways, thereby aggravating ER-stress-induced pancreatic β cell apoptosis [35]. As a strand of miR-204, miR-204-3p is involved in various diabetic complications. In diabetic cataract, miR-204-3p can regulate the migration and epithelial-to-mesenchymal transition of lens epithelial cells [36]. Meanwhile, miR-204-3p also plays a role in high-glucose-induced podocyte apoptosis and dysfunction [37]. In addition, in diabetic cardiomyopathy, miR-204-3p can regulate cardiomyocyte autophagy, thus affecting myocardial ischemia/reperfusion injury [38].
Voltage-gated calcium channels (VGCCs) and potassium channels are important for insulin secretion [39,40,41]. Among them, the L-type voltage-gated calcium channels (LVGCCs) are present in pancreatic β cells and are involved in glucose transport, lipolysis, and lipogenesis [42,43]. Although LVGCCs account for only ∼$50\%$ of the total Ca2+ current, their inhibition reduces glucose-induced insulin secretion by $80\%$ and nearly abolishes insulin release in vivo [44]. In humans, the two main LVGCCs are Cav1.2 and Cav1.3, and CACNA1C is the gene encoding Cav1.2. Cav1.2 was found to be required for first-phase insulin secretion and rapid exocytosis in pancreatic β cells, in which the expression level of CACNA1C is also high [45,46]. In mice, Cav1.2 is the only LVGCC, and the knockout of CACNA1C was lethal (glucose intolerance and loss of first-phase insulin secretion were observed) [47]. In addition, CACNA1C is also involved in diabetic peripheral neuropathy, diabetic heart disease, and diabetic cataract [48,49,50]. KCNIP2 (which encodes the KChIP2 protein) interacts with a subfamily of voltage-gated potassium channels to increase current density, accelerate recovery from inactivation, and slow inactivation kinetics [51]. Existing evidence indicates that the lack of insulin signaling in the heart of T2DM patients may be one mechanism for the decreased expression of KCNIP2, which in turn leads to abnormal changes in cardiac electrophysiology [52]. In addition, ANXA6 is involved in cholesterol transport and in the accumulation and storage of TG, and it plays an important role in glucose and lipid balance by regulating the release of adiponectin [53,54,55].
Some limitations exist in this study. We used MIN6 cells, a mouse pancreatic β cell line, for the experimental verification of the functions of lncRNA ENST00000462455.1. Considering species differences, its effect on human T2DM needs further evaluation. Moreover, the study lacked corresponding animal model verification. Meanwhile, our study used only bioinformatics methods to explore the possible ceRNA regulatory mechanism of ENST00000462455.1, and further experimental verification is required.
## 4.1. Participants
Six newly diagnosed T2DM patients and six patients with normal blood glucose were recruited for RNA sequencing. All subjects were Han Chinese, aged 40–65 years, and were recruited at the First Hospital of Jilin University from July to September 2020. Patients were diagnosed based on the guidelines for the prevention and control of type 2 diabetes in China (2017 Edition): type 2 diabetes was defined as fasting plasma glucose (FPG) ≥ 7.0 mmol/L or oral glucose tolerance test (OGTT) two-hour blood glucose ≥ 11.1 mmol/L; FPG < 6.1 mmol/L and OGTT two-hour blood glucose < 7.8 mmol/L defined the normal controls. Meanwhile, the level of triglycerides (TG) in all participants was ≥1.7 mmol/L according to the guidelines for the prevention and treatment of dyslipidemia in China (2016 Edition). No patient had previously controlled their blood glucose through drugs or other treatments. Moreover, the corresponding genes of the lncRNAs were verified in the GSE130991 dataset (910 samples). A total of 92 T2DM patients and 96 controls with hypertriglyceridemia were selected from the dataset based on the above guidelines. Meanwhile, we also recruited 120 T2DM patients and 120 controls with hypertriglyceridemia for RT-qPCR validation at the First Hospital of Jilin University from July to August 2021. All patients with a history of coronary artery disease (CAD), hypertension, atrial fibrillation, myocardial infarction, tumor, acute infectious disease, immune disease, or hematological disease were excluded from the study. All participants provided written informed consent, the study was approved by the Ethics Committee of Public Health of Jilin University, and the privacy of the participants was kept strictly confidential.
## 4.2. RNA Sequencing
Total RNA in blood was isolated and purified using a total RNA extraction kit. The NanoPhotometer® spectrophotometer (IMPLEN, Westlake Village, CA, USA) and the RNA Nano 6000 Assay Kit on the Agilent Bioanalyzer 2100 system (Agilent Technologies, Santa Clara, CA, USA) were used to assess RNA purity and integrity, respectively. A strand-specific library was constructed after removal of ribosomal RNA. After the library passed quality control, Illumina PE150 sequencing was performed according to the pooled effective library concentration and the required data output. Following sequencing, data filtering was conducted: we removed reads containing adapters, reads in which the proportion of undetermined bases (N) was ≥0.002, and read pairs in which one read contained more than $50\%$ low-quality bases. Meanwhile, the Q20, Q30, and GC content were calculated, and the clean reads were obtained. Subsequently, the clean reads were mapped with the Hisat2 software, using GRCh38.p12 (human) and GRCm38.p6 (mouse) as the reference databases. Based on the mapping results, we further assembled, filtered, and quantified the transcripts using the StringTie and Cuffmerge software, yielding the final expression matrix. All analyses in this study were based on these data, which are available in the GEO database (GSE193436).
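For illustration, the per-read quality metrics reported above (Q20, Q30, GC content) could be computed from a FASTQ file roughly as follows; this is a Biopython sketch with a hypothetical file name, not the pipeline actually used:

```python
from Bio import SeqIO

# Hypothetical FASTQ file; the study used paired-end Illumina PE150 reads
q20 = q30 = gc = total = 0
for rec in SeqIO.parse("sample_R1.fastq", "fastq"):
    quals = rec.letter_annotations["phred_quality"]
    q20 += sum(q >= 20 for q in quals)
    q30 += sum(q >= 30 for q in quals)
    seq = str(rec.seq).upper()
    gc += seq.count("G") + seq.count("C")
    total += len(seq)

# Assumes a non-empty file; all metrics are fractions of sequenced bases
print(f"Q20 {q20/total:.2%}  Q30 {q30/total:.2%}  GC {gc/total:.2%}")
```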
## 4.3. Real-Time Quantitative Polymerase Chain Reaction (RT-qPCR)
The total RNA was extracted using the MolPure® Blood RNA Kit (19241ES50, YEASEN) or MolPure® Cell RNA Kit (19231ES50, YEASEN) based on the sample type. Subsequently, we used the lnRcute lncRNA First-Strand cDNA Kit (KR202, TIANGEN) or FastKing gDNA Dispelling RT SuperMix (KR118, TIANGEN) to conduct reverse transcription. The cDNA was then analyzed by RT-qPCR using the lnRcute lncRNA qPCR Kit (FP402, TIANGEN) or SuperReal PreMix Plus (SYBR Green) (FP205, TIANGEN) on the QuantStudio 3 system (Applied Biosystems, Waltham, MA, USA). The PCR primers are shown in Table S10. Expression data were normalized to the expression of β-actin with the $2^{-\Delta\Delta Ct}$ method.
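A minimal sketch of the $2^{-\Delta\Delta Ct}$ calculation, with hypothetical triplicate Ct values; as in the text, ΔCt is taken relative to the β-actin reference:

```python
import numpy as np

def ddct_relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt method.
    ΔCt = Ct(target) - Ct(reference, e.g. β-actin), per sample;
    ΔΔCt = ΔCt(sample) - mean ΔCt of the control group."""
    dct = np.asarray(ct_target, float) - np.asarray(ct_ref, float)
    dct_ctrl = np.mean(np.asarray(ct_target_ctrl, float)
                       - np.asarray(ct_ref_ctrl, float))
    return 2.0 ** -(dct - dct_ctrl)

# Hypothetical triplicates: treated group vs. control group
print(ddct_relative_expression([24.1, 24.3, 24.0], [17.2, 17.1, 17.3],
                               [23.0, 23.1, 22.9], [17.0, 17.2, 17.1]))
```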
## 4.4. Cell Culture
MIN6 cells (mouse pancreatic beta cell line) were cultured in RPMI Medium 1640 (31800, Solarbio, Beijing, China) supplemented with $10\%$ fetal bovine serum (FBS) (04-001-1A, Biological Industries, Cromwell, CT, USA) at 37 °C with $5\%$ CO2.
## 4.5. Fluorescence In Situ Hybridization (FISH)
The Ribo™ lncRNA FISH Probe Mix (lnc11001001, RIBOBIO) and Ribo™ Fluorescent In Situ Hybridization Kit (C10910, RIBOBIO) were used for lncRNA FISH to detect the distribution of the target lncRNA. Cell slides were placed at the bottom of a 24-well plate, and each well was plated with $1 \times 10^{5}$ cells. After the cells had grown to about $80\%$ confluence, they were washed with phosphate-buffered saline (PBS) and fixed with $4\%$ paraformaldehyde. Subsequently, the cells were washed again and treated with permeabilization solution; then, 200 μL of prehybridization solution was added and the cells were blocked at 37 °C for 30 min. The prehybridization solution was discarded and 100 μL of hybridization solution containing the lncRNA FISH probe was added for overnight hybridization at 37 °C. The next day, the cells were washed with PBS, stained with DAPI, and photographed under a fluorescence microscope, with 18S and U6 as the reference genes.
## 4.6. Construction of Cellular Environment and Determination of lncRNA Expression
Based on different glycolipid environments, our experiment was divided into four groups: control (5 mmol/L D-glucose + PBS), high glucose (HG) (30 mmol/L D-glucose + PBS), high fat (HF) (5 mmol/L D-glucose + 400 µmol/L sodium palmitate), and high glucose and high fat (HG + HF) (30 mmol/L D-glucose + 400 µmol/L sodium palmitate) [56,57]. The expression of the target lncRNA in each group was determined by RT-qPCR after 24 h, 36 h, 48 h, 72 h, and 96 h.
## 4.7. Cell Transfection
An siRNA was delivered by liposome-mediated transfection to silence the target lncRNA. The corresponding siRNA sequences are shown in Table S11. Firstly, six-well plates were seeded with $2 \times 10^{5}$ cells per well. After 24 h, the siRNA against the target lncRNA (GenePharma) was transfected into the cells using Lipofectamine 2000 (11668019, Invitrogen). After incubation at 37 °C with $5\%$ CO2 for 6 h, the medium was changed to complete medium (supplemented with $10\%$ FBS) for another 24 h. Subsequently, RNA was either extracted directly or the cells were further cultivated in the different glycolipid environments for 48 h, and the expression level of the target lncRNA in the negative control group (si-NC) and the experimental group (si-lncRNA) was detected to evaluate the transfection effect.
## 4.8. CCK-8 Assay
The cells were seeded in 96-well plates ($4 \times 10^{3}$ cells per well). After the lncRNA was silenced, the corresponding glycolipid environments were constructed for 48 h, and then 10 μL of CCK-8 reagent (CK04, Dojindo) was added to each well. Subsequently, the plate was incubated for another 1–4 h and the absorbance was measured at 450 nm with a microplate reader.
## 4.9. Apoptosis Assay
Cell apoptosis was detected with the FITC Annexin V Apoptosis Detection Kit I (556547, BD Biosciences Pharmingen). Firstly, cells were seeded in 6-well plates ($2 \times 10^{5}$ cells per well). After the lncRNA was silenced, the corresponding glycolipid environments were constructed for 48 h. Then, the original medium was discarded and the cells were washed with cold PBS. Subsequently, 1× binding buffer was added to each well and the cells were stained with FITC and PI. After a 15 min incubation protected from light, flow cytometry analysis was performed using a FACSCalibur (BD Biosciences Pharmingen).
## 4.10. Enzyme-Linked Immunosorbent Assay (ELISA)
Insulin secretion was assessed by ELISA. Similarly, cells were seeded in 6-well plates ($2 \times 10^{5}$ cells per well). After the lncRNA was silenced, the corresponding glycolipid environments were constructed for 48 h. Then, the supernatant was collected and assayed with a Mouse INS ELISA kit (ml001983, mlbio). All experiments were performed strictly in accordance with the manufacturer's instructions.
## 4.11. Detection of Transcription Factor Levels of Pancreatic β Cell Function and Activity
Cells were seeded in 6-well plates ($2 \times 10^{5}$ cells per well). After the lncRNA was silenced, the corresponding glycolipid environments were constructed for 48 h. Subsequently, RT-qPCR was used to detect transcription factors related to pancreatic β cell function and activity (Ins1, Pdx-1, MafA, Glut2, TCF7L2, FoxO1, ETS1, Pax6, Ngn3).
## 4.12. Statistical Analysis
Normally distributed continuous variables were described by mean and standard deviation, and skewed continuous variables by median and interquartile range. Correspondingly, the t-test or the Wilcoxon rank-sum test was conducted based on the data distribution. The chi-square test was conducted for categorical variables. One-way ANOVA was used for comparisons among multiple groups, with the LSD test for pairwise comparisons. The diagnostic value of the lncRNA for T2DM in hypertriglyceridemia subjects was evaluated by the ROC curve. All of the above analyses were performed mainly with SPSS 24.0 and GraphPad Prism 7.0. A two-sided p value less than 0.05 was considered significant. Independent replicate experiments were conducted in our study.
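The distribution-dependent choice between the t-test and the Wilcoxon rank-sum test can be sketched with SciPy as a stand-in for the SPSS workflow; the Shapiro–Wilk normality check is an assumption, since the paper does not state how distributions were assessed:

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Two-sided comparison of two independent groups: t-test if both
    groups pass a normality check, otherwise the Wilcoxon rank-sum
    (Mann-Whitney U) test."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = stats.shapiro(a)[1] > alpha and stats.shapiro(b)[1] > alpha
    if normal:
        return "t-test", stats.ttest_ind(a, b)
    return "rank-sum", stats.mannwhitneyu(a, b, alternative="two-sided")

# Hypothetical expression values for two groups of n = 8
rng = np.random.default_rng(1)
print(compare_groups(rng.normal(1.9, 0.1, 8), rng.normal(1.3, 0.2, 8)))
```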
R 4.0.4, Cytoscape 3.8.2, and GSEA 4.2.1 software were used for the bioinformatics analyses. Differentially expressed genes were screened using the limma package [58], and correlations between genes were analyzed by Pearson correlation. Meanwhile, the ggplot2 [59] and pheatmap [60] packages were used to draw the volcano plot and heat map, respectively. The ceRNA network construction strategy for the target lncRNA is shown in Figure S5, and Cytoscape was used to draw the networks. The clusterProfiler package [61] was used for GO (Biological Process (BP), Cellular Component (CC), and Molecular Function (MF)) and KEGG enrichment analyses, and the corresponding enrichment circle maps were drawn with an online analysis tool (https://www.omicsshare.com/tools/, accessed on 13 November 2021). Gene Set Enrichment Analysis (GSEA) was performed using the GSEA software. In addition, PPI network analysis was performed with STRING 11.5 (http://string-db.org, accessed on 12 November 2021) and Cytoscape, and MCODE was used for cluster analysis of the PPI network.
## 5. Conclusions
The lncRNA ENST00000462455.1 is a potential biomarker for hypertriglyceridemia patients with T2DM. More experimental studies are needed to verify the function of the lncRNA and analyze its possible mechanism. |
# Clustering of the Adult Population According to Behavioural Health Risk Factors as the Focus of Community-Based Public Health Interventions in Poland
## Abstract
Effective lifestyle health promotion interventions require the identification of groups sharing similar behavioural risk factors (BRF) and socio-demographic characteristics. This study aimed to identify these subgroups in the Polish population and check whether local authorities' health programmes meet their needs. Population data came from a 2018 questionnaire survey of a random representative sample of 3000 inhabitants. Four groups were identified with the TwoStep cluster analysis method. One of them (“Multi-risk”) differed from the others and from the general population by a high prevalence of numerous BRF: $59\%$ [$95\%$ confidence interval: 56–$63\%$] of its members smoke, $35\%$ [32–$38\%$] have alcohol problems, $79\%$ [76–$82\%$] indulge in unhealthy food, $64\%$ [60–$67\%$] do not practice recreational physical activity, and $73\%$ [70–$76\%$] are overweight. This group, with an average age of 50, was characterised by an excess of males ($81\%$ [79–$84\%$]) and people with basic vocational education ($53\%$ [50–$57\%$]). In 2018, only 40 out of all 228 health programmes in Poland addressed BRF in adults; only 20 referred to more than one habit. Moreover, access to these programmes was limited by formal criteria. There were no programmes dedicated exclusively to the reduction of BRF. The local governments focused on improving access to health services rather than on pro-health changes in individual behaviours.
## 1. Introduction
Over the past ten years, local authorities have increasingly prioritised the health and well-being of local communities. Their activities concern various aspects of social life, including organisation of medical care, ensuring a clean environment, stable and affordable housing, safety, preventing addictions, and many others. In many countries, public health services have been entrusted to local governments by acts of national parliaments (in force, e.g., in the Netherlands since 2008, in Norway since 2012, and in England since 2013) [1]. Their actions are to equalise the distribution of factors that directly or indirectly affect the health of individuals and communities. Also, in Poland, local authorities, by law, carry out public health tasks. They develop, implement, and finance health programmes (called health policy programmes)—sets of actions targeting specific problems in their communities. This paper discusses interventions in the area of health promotion and disease prevention proposed by Polish local authorities to reduce behavioural risk factors.
Such actions are crucial in Poland, where the percentage of deaths due to cardiovascular diseases is distinctly higher than the EU average (in 2017, $43\%$ vs. $37\%$) [2]. According to the Global Burden of Disease Study 2019, a high percentage of total deaths in Poland is attributable to behavioural (thus modifiable) risk factors ($44\%$ vs. $37\%$ in the EU) [3]. These numbers do not include the burden connected with excessive body weight, considered a metabolic factor, which contributes to a further $14.2\%$ of deaths (against $10.9\%$ in the EU). Moreover, health problems in Poland are concentrated in specific demographic and social groups. According to EUROSTAT [4], the difference between men's and women's life expectancy equals eight years and is one of the highest among EU countries. Differences in health status and life expectancy have been reported between inhabitants of large cities and small towns [5]. The prevalence of smoking and overweight/obesity varies with the level of education [6].
An effective health policy in this area must recognise how harmful habits are distributed in the population [7]. Moreover, behavioural risk factors tend to aggregate, which has important implications for preventive medicine and health promotion [8]. This co-occurrence and the concentration of risk factors in specific population subgroups have already been thoroughly documented in many countries [9,10]. These are the cases of associating excessive drinking with smoking [8,10,11] and the concentration of bad habits among people who are less educated [10,12] and with lower socio-economic status [10,11,13]. Some of these correlations are ambiguous, e.g., the relationship between the level of physical activity and smoking differs in various countries and for various social groups [14,15].
In Poland, such studies are undertaken rarely and usually on a local scale—regarding professionally active subpopulations or inhabitants of one region [16]. The exception is the Polish Multicentre National Population Health Examination Survey (WOBASZ) that has been conducted twice (2003–2004 and 2013–2014). The comparison of both editions of this survey suggests that despite changes over ten years (both favourable—regarding smoking, as well as unfavourable—more frequent obesity, reduction in physical activity among men), the percentage of Poles with a healthy or unhealthy lifestyle remained unchanged: $2\%$ and $25\%$, respectively [17]. However, no attempt has been made to describe population groups characterised by multiple risk factors. Considering the mentioned needs and the gap in knowledge concerning the co-occurrence of behavioural risk factors in the population of adult Poles, we undertook the study presented in this paper.
The main objectives of the study were:
- Identification of the groups of adult individuals in Poland who share various features in terms of risk factors for health (behavioural, overweight, lack of vaccination and preventive medical examinations) and socio-demographic characteristics, based on the results of a nationwide survey;
- Checking whether the most exposed people find support in the interventions taken by local authorities, by reviewing all health programmes proposed for realisation in the year of the survey.
## 2.1. Survey
The questionnaire survey on the prevalence of health risk factors conducted in Autumn 2018 was based on a random sample of 3000 inhabitants of Poland aged 20 and above. The sample was drawn from the Universal Electronic System for Registration of the Population (PESEL). In order to ensure the intended number of subjects at the expected response rate of $50\%$, 6000 people were drawn; the interviews were conducted until the assumed sample of 3000 respondents was reached. The sampling scheme used included the population stratification according to the province of living and residence location class (in 6 categories: rural areas, towns with a population of up to 20,000, 20–100,000, 100–500,000, 500,000–1 million, and the largest city in the country with 1.8 million inhabitants—Warsaw) and two stages of drawing lots (first communes within the strata, then inhabitants of the selected communes in the gender and age proportions appropriate for the stratum). The obtained sample was representative for the national population in terms of sex, age, province of living, and the share of urban and rural residents.
Experienced interviewers conducted the survey using the computer-assisted personal interviewing (CAPI) method. The collected data regarded socio-demographic characteristics, height and body weight (in self-assessment), selected lifestyle-related health behaviours, use of medical care, and financial difficulties.
The results of the survey allow for the estimation of risk factor prevalence in the national population (after corrections for differences in sex and age structures between the final sample and the population). They also enable the identification of groups of individuals who share health behaviours and socio-demographic characteristics, i.e., involving people potentially in need of assistance in similar scope and form.
## 2.2. Statistical Analysis
In order to identify the groups mentioned above, cluster analysis was used, grouping individuals rather than risk factors. The TwoStep cluster analysis method was applied. It is often utilised in similar studies because it enables the simultaneous use of continuous and categorical variables and aids in determining the optimal number of clusters—their number does not need to be known a priori [11,13]. The following variables were used for cluster identification (a sketch of an analogous clustering procedure is given after this list):
- Binary: sex; marital status (married or cohabitant vs. single); living in rural (vs. urban) areas; smoking (currently); problems with alcohol (affirmative answer to 3 questions: (a) Have you ever thought you were drinking too much alcohol? (b) Have people ever irritated or annoyed you with their comments regarding your alcohol-drinking habits? (c) Have you ever felt bad or felt guilty because of drinking alcohol?); overweight (BMI ≥ 25); lack of recreational physical exercise (sport, gymnastics, jogging, cycling, etc.)—spending less than 10 min per week on physical activity resulting in at least a raised respiratory or heart rate during the spring–summer and autumn seasons; unhealthy products in one's diet (fast food meals; sweet, carbonated beverages; or sweets several times a week); too little vegetable/fruit intake in one's diet (fewer than 5 portions a day); eating fish less frequently than once a week; lack of preventive medical examinations (diagnostic laboratory tests, cytology, mammography, colonoscopy) or vaccination in the last 3 years;
- Categorical: education (primary, basic vocational, secondary, tertiary);
- Quantitative: age (in years).
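SPSS's TwoStep algorithm is proprietary; as a loose analogue under stated assumptions (one-hot encoding of categorical variables, standardised age, k-means clustering, and silhouette-based selection of the cluster count in place of TwoStep's BIC/AIC criterion), the procedure could look like this; the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.compose import ColumnTransformer
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical respondent-level data frame with the variables listed above
df = pd.read_csv("survey_2018.csv")
binary = ["sex", "married", "rural", "smoking", "alcohol_problems",
          "overweight", "no_exercise", "unhealthy_food", "few_veg_fruit",
          "rare_fish", "no_prevention"]

pre = ColumnTransformer([
    ("cat", OneHotEncoder(), binary + ["education"]),
    ("num", StandardScaler(), ["age"]),
])
X = pre.fit_transform(df)

# Pick the number of clusters by silhouette score, standing in for
# TwoStep's automatic selection
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
best_k = max(scores, key=scores.get)
df["cluster"] = KMeans(n_clusters=best_k, n_init=10,
                       random_state=0).fit_predict(X)
```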
After identifying clusters, their characteristics were found within the scope of the above variables (percentage or median with a $95\%$ confidence interval—$95\%$ CI presented in square brackets). Analogous values were also calculated for additional features (including financial difficulties—insufficient money to buy food, basic clothes, or paying monthly bills in the last year—and the need for medical consultation in the last year) not included directly in the clustering procedure due to their correlations with the used variables. They were used in the discussion of obtained results.
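The bracketed intervals can be reproduced for any percentage with a standard confidence interval for a proportion; the paper does not state which interval method was used, so Wilson is assumed here and the counts are hypothetical:

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical counts: e.g., smokers within a cluster of 790 members
count, n = 466, 790
low, high = proportion_confint(count, n, alpha=0.05, method="wilson")
print(f"{count / n:.0%} [{low:.0%}-{high:.0%}]")
```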
The chi-square and Kruskal–Wallis tests were applied for qualitative and quantitative variables, respectively, when comparing the characteristics determined for particular clusters. The statistical significance of observed differences was adjusted for multiple comparisons (Bonferroni correction). In order to eliminate the influence of differences in the age structure and education level between the compared groups on the prevalence of overweightness and obesity, direct standardisation of rates was applied; the national population served as the reference population.
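Direct standardisation weights each stratum-specific rate by the reference (national) population structure; a small sketch with hypothetical age strata and counts:

```python
import numpy as np

def directly_standardised_rate(stratum_rates, stratum_weights):
    """Direct standardisation: average stratum-specific rates using the
    reference population's stratum sizes as weights."""
    r = np.asarray(stratum_rates, float)
    w = np.asarray(stratum_weights, float)
    return float(np.sum(r * w) / np.sum(w))

# Hypothetical cluster-specific overweight rates per age stratum
rates = [0.55, 0.68, 0.80, 0.86]        # 20-39, 40-59, 60-69, 70+
ref_pop = [11.0e6, 10.5e6, 5.2e6, 4.1e6]  # hypothetical national counts
print(f"{directly_standardised_rate(rates, ref_pop):.1%}")
```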
In all statistical tests, the assumed significance level was 0.05. The analysis was conducted with the use of the SPSS12.PL package.
## 2.3. Health Programmes
In the next stage of the study, the health programmes planned for implementation in the year of the survey (2018) were analysed to determine the extent to which they reflected the population's needs in limiting behavioural risk factors.
The complete data come from the ProfiBaza information system [18], which stores information about public health interventions in Poland, including all health programmes submitted for assessment by the state Agency for Health Technology Assessment and Tariff System (AOTMiT). Under the provisions of law, the realisation and financing of each programme needs the sanction of the President of this institution.
## 3.1. Cluster Analysis
The prevalence of main health risk factors in the studied group can be considered an estimate for the Polish population; differences in results after adjustment for the age structure do not exceed 0.5 percentage points. In Poland, $30\%$ [29–$32\%$] of the population smoke, $13\%$ [12–$15\%$] have drinking problems, $67\%$ [65–$68\%$] indulge in unhealthy products in their diet, the same percentage eat too few vegetables and fruit, $47\%$ [45–$48\%$] do not practice physical activity in their free time, $50\%$ [48–$51\%$] are overweight, and $44\%$ [42–$45\%$] do not undergo preventive medical examinations or vaccination (Table 1).
The analysed characteristics are not distributed evenly in the population; thus, four population clusters can be distinguished. These received the subjective names: 1—“The youngest” (covering $29\%$ [27–$31\%$] of the adults), 2—“Multi-risks” ($26\%$ [25–$28\%$]), 3—“The oldest” ($27\%$ [25–$28\%$]), and 4—“Healthy lifestyle” ($18\%$ [17–$20\%$]). Relative to the general population, the main risk factors in cluster 1 are a high number of unhealthy products in the daily diet and no vaccination/preventive medical examinations. In cluster 2, all risk factors occur much more frequently than in the general population. Cluster 3 is characterised by a lack of physical activity and overweight/obesity. In cluster 4, all risk factors are significantly less common than in the general population. The description of the clusters is shown in Table 2.
The distributions of particular features and statistical significance of differences between the clusters are presented in Table 1 (variables used in the clustering procedure) and Table 3 (additional characteristics of identified clusters not directly used in the clustering process).
The “Multi-risks” cluster unfavourably deviates from the other clusters in terms of the prevalence of behavioural risk factors; only the lack of recreational physical activity is as frequent ($64\%$) as among “The oldest”. The latter group, however, consists of people on average 14 years older and, as indicated by the age-specific rates, more active up to the age of 69 (Figure 1). Equal percentages of inactive people result from a high excess of people over 70 years of age in this cluster. The age structure of members of all clusters is presented in Figure 2.
Extra body weight constitutes a severe problem in two clusters—it concerns $90\%$ of “The oldest” and $73\%$ of “Multi-risks” clusters; the percentage of obese people is $24\%$ and $14\%$, respectively. In this case, age-specific coefficients among “The oldest” are much higher—after standardisation by age, the percentage of overweight people was $94\%$ vs. $71\%$, whereas the obesity rate was $22\%$ vs. $14\%$. The effects also do not originate from differences in the education structure—after standardisation by education level, the overweight rate is $93\%$ vs. $75\%$. Among “The oldest”, the problem prevails more often, despite the clearly healthier diet (Table 1 and Table 3).
## 3.2. Review of Health Programmes
In 2018, AOTMiT received 228 health programmes for assessment. Local governments had developed $97\%$ of them for realisation in their administrative units. The Ministry of Health submitted the remaining seven programmes ($3\%$ of the total) for nationwide application. Their nature is summarised in Figure 3.
Almost $40\%$ of the total were intended solely for children and adolescents. Adults were most often (in 51 out of 139 programmes) offered vaccination (optional in the country, most often against influenza—Figure 3), in $80\%$ of cases available only to people over 60 years of age. As many as 46 programmes were devoted to improving the accessibility of healthcare for people with diagnosed health problems. Forty-two programmes offered participation in screening examinations.
Among health programmes aimed at adults, 40 ($29\%$) dealt with the issue of behavioural risk factors, either in the context of healthcare or diagnostics. Half of the programmes in question acknowledged intervention in the scope of one behavioural factor (physical activity in 12 cases, nutrition in 5, smoking in 3), 11 programmes combined physical activity and nutrition, whereas 9 addressed three or more factors (Table 4).
All health programmes precisely specify the age range of recipients; $34\%$ of programmes directed at adults involved people solely over the age of 60 or 65. However, there are no consistent, medically, and socially justified criteria for determining the age limits for the availability of these programmes, e.g., the difference in the eligibility age between particular osteoporosis prevention programmes is 10 years.
The aim of this study has not been to assess the substantive aspects of the presented health programmes (AOTMiT negatively reviewed $19\%$ of programmes directed at adults, but formal shortcomings of the projects could also account for this). However, the presented data indicate that their authors consider neither the co-occurrence of risk factors nor the characteristics of groups with such unfavourable habits.
## 4. Discussion
This study identified four clusters in the Polish population, each of which shared the same health risk factors. Similarly to other countries, a group with a healthy lifestyle was found [9]. Age-related factors characterised the following two clusters: “The youngest”—the most physically active, overusing unhealthy food, and not interested in vaccination or preventive examinations; and “The oldest”—mostly women, avoiding smoking and alcohol, low physical activity, and generally overweight ($90\%$ overweight, $24\%$ obese). Such phenomena as excessive consumption of fast-food meals by young people or low physical activity of older women are well known [11].
The most significant outcome is the identification of the “Multi-risks” cluster that combines most behavioural health risk factors. It seems that this group, constituting approximately one-fourth of the adult population, determines the high excess mortality rate of men in Poland. It mainly consists of males ($81\%$), $59\%$ smoke, $35\%$ have alcohol problems, $83\%$ eat too few vegetables and fruit, $79\%$ indulge in unhealthy food, $64\%$ are physically inactive, and $73\%$ are overweight. The average age of its members is 50; more than half have basic vocational education. The existence of such a group has been reported in other countries [9,13]. It was also observed that subgroups with lower education engage in poor behaviours more often [7,10,11]. This “Multi-risks” group needs urgent intervention in the field of health promotion, also undertaken at local levels, aimed at lifestyle changes to help eliminate or limit several risk factors in one person.
Meanwhile, local authorities in Poland mainly focus on providing access to medical services—one-third of the health programmes directed at adults were devoted to treating or rehabilitating people with a diagnosed disease. In the scope of prevention, adult inhabitants were most often offered free vaccination ($37\%$ of programmes).
Other limitations in the availability of programmes result from the recipients’ age; $40\%$ of programmes are intended for children and adolescents and $34\%$ target adults over 60 or 65. Consequently, there is a shortage of programmes involving people at about 50 years of age who have multiple health risk factors but have not been diagnosed with one of the supported diseases—only 12 such programmes were available in 2018 ($9\%$ of these devoted to adults).
Local authorities do not implement health programmes aimed solely at reducing the prevalence of health risk factors. Although included in $29\%$ of adult-oriented programmes, they were always combined with rehabilitation (thus intended for patients) or screening. Moreover, half of them regarded only one risk factor.
The effectiveness of multiple-risk interventions remains an open question. In general, considering the synergy of individual risk factors and the economic aspect of intervention or the lack of it, such actions have a more significant impact on public health than those targeted at single risk factors [9]. However, comparisons of the effectiveness of both strategies (simultaneous vs. sequentially delivered multiple health behaviour change interventions) can be inconclusive [19]. Moreover, a meta-analysis of 69 trials involving over 73,000 people revealed that interventions covering education and skill training aimed at many risk behaviours simultaneously only result in changes concerning daily diet and physical activity, whereas the strategy of simultaneous reduction of smoking and other risk factors might be sub-optimal [20]. Regardless of the effectiveness of particular strategies of multiple-risk interventions, even if a person manages to eliminate one risk factor, they may have no chance of receiving support for the successive elimination of further factors.
The efficiency of Polish health programmes is additionally affected by the fact that they do not differentiate the scope or methods regarding recipients’ sex or education level and neglect their culture of health (conditioned by age, education, and social status). People from the identified clusters differ in terms of lifestyle and attitude towards their own health—they represent diverse cultures of health. Over one-third of people in the “Multi-risks” cluster did not feel the need to receive medical help or even consultancy in the last year, and almost three-quarters did not undertake any preventive measures. Their physical activity is substantially lower than among “The youngest” (inactive $64\%$ vs. $24\%$). This difference does not result only from their age. The percentage of inactive members of the “Multi-risk” cluster at 20–40 already exceeds $60\%$ (Figure 1). On the other hand, both “The oldest” and “Healthy lifestyle” clusters comprise mainly females who do care for their health (diet and preventive actions). They clearly differ, however, in terms of age, education level, financial resources (frequency of financial difficulties), and also, possibly, the social support level (frequency of being in a long-term relationship) and opinion on socially accepted women’s behaviours (smoking and alcohol). Different problems in these groups require individualised solutions regarding the provided information and training of particular personal skills. Meanwhile, to increase the persuasive effect of communications in health promotion, effectively broadening recipients’ knowledge and their preparation for making medical decisions, it is recommended to apply culture-sensitive health communication adjusted to beneficiaries’ cultural backgrounds [21].
Thus, tailored interventions, even those conducted using computer-aided methods, are strongly recommended [22,23]. For instance, the acknowledgment of the occupational setting, discussed in relation to blue-collar workers [24], can be of great importance in Poland, where three-quarters of people in the “Multi-risks” cluster are of professionally active age (below 60) and over half have a basic vocational education.
It should be concluded that local governments' activities in health promotion and disease prevention are insufficient to ensure control over risk factors for a national population of almost 38 million. There is a need for interventions at the central level, realised with the use of primary health care [25] and perhaps also occupational medicine (the effectiveness of workplace-based policies is still under debate) [26]. Nevertheless, any party undertaking such actions, including local governments, should consider the existence of a group particularly affected by behavioural risk factors that needs urgent and comprehensive help [11]. These people should be the target of appropriate interventions for this reason, not as residents of a certain age or patients needing treatment or rehabilitation for a specific disease. This study is aimed at identifying and describing this group.
The limitation of the study that should be discussed is the age of the subjects. The analysis of the prevalence of behavioural risk factors, the identification of clusters, and the review of available health programmes concern people aged 20 years or older. However, it has already been proven that many harmful health behaviours start at a younger age. Adult smoking begins in adolescence [27], and nutrition in childhood influences the risk of later obesity [28]. The Health Behaviour in School-aged Children (HBSC) study results show that among 15-year-olds in Poland in 2018, $12\%$ regularly smoked (including $5\%$ daily), $26\%$ ate sweets every day, and only $27\%$ met the WHO recommendations for moderate-to-vigorous physical activity [29]. The analysis of health programmes addressed to children and adolescents is purposeful and planned to be carried out in the future.
The timing of the questionnaire survey (2018), i.e., before the outbreak of the COVID-19 pandemic, may also be questionable. However, it turned out that in the following years the number of health programmes decreased significantly [18]. This tendency was evident during the COVID-19 pandemic. In 2019, 195 programmes were submitted for assessment; in 2020 it was 97, whereas in 2021 there were only 80. At the same time, the need for aid increased. In many countries, the lockdown unfavourably affected the population's health behaviours. The prevalence of overweightness and obesity increased due to limited physical activity and changes in dietary habits (eating more frequently and snacking) [30,31,32,33]. In Poland, there is a visible aggravation of previously practised unfavourable habits—over $45\%$ of smokers smoked more frequently during the lockdown, and a stronger tendency to drink more was found among alcohol addicts. Similarly, older (and thus, in general, heavier) people were more likely to gain weight, whereas those underweight tended to lose it further [34]. These results confirm that the survey has not lost relevance and suggest that the population grouping according to risk factors could have become even stronger. Thus, tailored interventions aimed at reducing multiple risk factors will be increasingly needed to prevent further consolidation of risk factors in certain social groups, which would exacerbate the previously observed health inequalities [33].
## 5. Conclusions
Among inhabitants of Poland, one can distinguish four population groups that differ in terms of the prevalence of behavioural health-related risk factors and socio-economic situation. Similarly to other countries, a “Multi-risks” cluster was identified. It constitutes approximately one-quarter of the adult population and differs from other groups and the general population, with a high prevalence of numerous lifestyle-related health risk factors.
The existence of the said group, comprising mostly men, can be related to the phenomenon of excess male mortality and the big difference (8 years) in the life expectancy between men and women in Poland.
The content and conditions of participation in health programmes indicate a need for better recognition of this problem by local authorities.
Most health policy programmes focus on providing inhabitants with free vaccination and complementing limited access to healthcare (mainly in terms of rehabilitation). In general, lifestyle-related health risks are rarely considered and always in the context of a specific disease, combined with screening or therapeutic activity.
The recruitment criteria for programmes are formal (age and diagnosed medical problem). They do not consider the recipients’ education level or health culture—their attitude towards their health, which is expressed by, among other things, practicing various harmful behaviours.
People affected with multiple risk factors, mostly men aged about 50 with vocational education, cannot expect effective support under health policy programmes proposed by local governments.
One can expect that both the lifestyle-related differences discussed in this article and their health outcomes will be exacerbated in the future. These are side effects of the COVID-19 pandemic, when harmful behaviours intensified, especially among already affected people. At the same time, the number of proposed health programmes has significantly decreased.
The results of this study should contribute to improving health programmes to reduce the prevalence of behavioural risk factors and their co-occurrence. |
# Cognitive Function and Depressive Symptoms among Chinese Adults Aged 40 Years and Above: The Mediating Roles of IADL Disability and Life Satisfaction
## Abstract
The purpose of this study was to investigate the relationship between cognitive function and depressive symptoms among Chinese adults aged 40 years and above, as well as the serial multiple mediating effects of Instrumental Activities of Daily Living (IADL) disability and life satisfaction on this relationship. The data were obtained from the China Health and Retirement Longitudinal Study (CHARLS, 2013–2018), including 6466 adults aged 40 years and above. The mean age of the adults was 57.7 ± 8.5 years. The SPSS PROCESS macro was used to examine the mediating effects. The results indicated a significant association between cognitive function and depressive symptoms five years later (B = −0.1500, $95\%$CI: −0.1839, −0.1161), which could also be demonstrated through three mediation pathways: (1) the mediating pathway through IADL disability (B = −0.0247, $95\%$CI: −0.0332, −0.0171); (2) the mediating pathway through life satisfaction ($B = 0.0046$, $95\%$CI: 0.0000, 0.0094); and (3) the chain mediation pathway through IADL disability and life satisfaction (B = −0.0012, $95\%$CI: −0.0020, −0.0003). Both IADL disability and life satisfaction were shown to be crucial mediators of the relationship between cognitive function and depressive symptoms five years later. It is necessary to improve individuals' cognitive function and reduce the negative impact of disability on them, which is important for enhancing their life satisfaction and preventing depressive symptoms.
## 1. Introduction
Depression is a common clinical mental disorder characterized by a persistent depressive mood [1,2], and it has become one of the most common medical illnesses [3,4]. It not only places a heavy burden on society because of long-term medication and health services but also severely affects the health and quality of life of individuals [5,6]. A study reported that direct and indirect spending on treating major depression has been steadily increasing each year in the United States [7]. Depressive symptoms are also quite common in China [8]: $2.2\%$ of males and $3.3\%$ of females in China suffer from major depressive disorders [9]. Wen et al. found that the incidence of depressive symptoms was as high as $22.3\%$ over a 4-year follow-up among Chinese adults [10]. As such, it is essential to identify the factors related to depressive symptoms and probe into the mechanisms among these factors.
As part of the aging process, increasing age is often accompanied by a decline in cognitive function [11], characterized by decreased memory, attention, and reasoning ability [12]. The link between cognitive function and depression has attracted a lot of attention, and many studies have examined the relationship between them. The relationship is bidirectional: depression affects cognitive function, and, conversely, cognitive decline can also lead to depression. For example, depression can accelerate brain aging and increase the risk of cognitive impairment [13] through peripheral and cerebral microvascular dysfunction [14]. At the same time, studies have demonstrated that cognitive decline reduces people's learning and thinking ability and then affects all aspects of life, work, and social interaction, which can increase psychological stress and even lead to depression or other mental illnesses [15]. Tatiana et al. found that cognitive decline might predict depressive symptoms among older Hispanic adults living in the community [16]. Archana et al. used dynamic change models and latent difference scores to find that memory performance related to cognitive function predicted changes in depression two years later [17]. By examining the relationship between cognitive impairment and mood, Jennifer et al. found that participants with mild cognitive impairment had increased odds of depressive symptoms, whereas participants without cognitive impairment showed no change in the rates of depressive symptoms [18]. In China, Yang et al. found that people with cognitive decline have a higher incidence of depression [19]. A cohort study has shown that participants with cognitive impairment had poorer mental status and an increased risk of depression one year later [20]. Clinical studies involving younger and elderly individuals have also established the inverse relationship between cognitive function and depression [21,22,23]. Moreover, in terms of gender differences, females in their mid-to-late 40s go through menopause, the time of life when a woman experiences 12 consecutive months of amenorrhea because of a loss of follicular activity [24]. Due to the relative deficiency of androgens, estrogen, and progestin, postmenopausal women may experience depression and cognitive decline, which severely impairs their quality of life [25,26]. Consequently, it is also worth considering whether the effect of cognitive function on depressive symptoms differs by gender.
Although previous studies have explored the relationship between cognitive decline and depression, the impact of individual physical and psychological changes following cognitive decline on depression is also worthy of attention. As one of the adverse physical consequences of cognitive decline [27,28], disability can be considered a series of physical limitations that influence individuals' daily social, recreational, and work activities, and it is generally measured by the activities of daily living (ADL) scale or the instrumental activities of daily living (IADL) scale. A cross-sectional study of elderly people in China indicated that nearly one in five individuals had ADL difficulties, but two in five had IADL difficulties. Most elderly people need help with IADL, such as bathing and shopping [29]. IADL generally involve more complex and varied activities of daily living compared with ADL, which require multiple cognitive domains and cognitive flexibility to complete together [30]. A study suggests that the association between cognitive function and ADL depends substantially on IADL [31]. Moreover, hippocampal and cortical gray matter volumes are correlated with IADL [32], suggesting that cognitive decline contributes to the incidence of IADL disability. According to a study involving 10,898 Chinese people, one of the most common risk factors for IADL disability in males was cognitive impairment [33]. Therefore, we chose IADL disability, which is more closely associated with cognitive function, as one of the indicators in this study. Regarding whether disability affects depressive symptoms in adults, prior studies have shown that, compared with individuals without disabilities, individuals with disabilities were at increased risk of developing depression [34,35]. By constructing a Back Propagation neural network model, Chinese scholars found that disability ranked fourth among the risk factors for depression among Chinese individuals aged 45 or older [36]. These findings strongly suggest that disability is not just a consequence of cognitive decline but is also a key predictive factor for depression. In terms of IADL disability, previous research has confirmed that people with worse IADL performance were more likely to develop depressive symptoms over time [37]. A nationally representative study has shown that depressive symptoms were associated with an increase in IADL disability among Latinos [38]. In China, Li et al. found that IADL disability was significantly associated with an increased incidence of depression among older adults in both males and females [39]. Decreased IADL ability may be a precursor of depression [40]. Therefore, it is of interest to explore the effect of IADL disability on the relationship between cognitive function and depressive symptoms.
Life satisfaction is a subjective judgment process and is often considered a fundamental dimension for measuring individuals’ quality of life [41]. Among studies on the relationship between cognitive function and life satisfaction, previous research has shown that elderly people with cognitive decline had lower life satisfaction [42]. A national study of 10,081 elderly South Koreans showed that cognitive function was an important factor in life satisfaction [43]. In a longitudinal study, life dissatisfaction was related to the development of mild cognitive impairment among older adults [44]. However, there are few reports on the association between cognitive function and life satisfaction among Chinese people, which is worth exploring. In terms of the relationship between disability and life satisfaction, research has demonstrated that ADL and IADL disabilities were negatively associated with life satisfaction; the loss of independence in daily living abilities, especially IADL ability, can trigger a significant decline in perceived quality of life and a lower level of life satisfaction [45]. In addition, life satisfaction has been shown to be linked to mental disorders, such as depression [42,46]. Zhang et al. studied nationally representative data in China and found that, compared with those who were satisfied with their lives, elderly people with lower life satisfaction were more than twice as likely to be depressed [47]. Scholars have also found that cognitive decline was related to disability incidence, which was more common among elderly people who were dissatisfied with their lives [48]. Thus, cognitive function, IADL disability, and life satisfaction are related to one another. Given the relationships between cognitive function, life satisfaction, and depressive symptoms, life satisfaction may mediate the relationship between cognitive function and depressive symptoms.
Exploring the effects of IADL disability and life satisfaction on the relationship between cognitive function and depressive symptoms allows a better understanding of that relationship and its internal mechanism, and it provides a reference for the prevention of, and intervention in, depression after cognitive decline. This study aimed to assess the relationships between cognitive function, IADL disability, life satisfaction, and depressive symptoms five years later among Chinese adults aged 40 years and above. We proposed three hypotheses: H1, cognitive function affects depressive symptoms five years later; H2, IADL disability and life satisfaction each independently mediate the association between cognitive function and depressive symptoms five years later; and H3, IADL disability and life satisfaction exert a serial mediation effect between cognitive function and depressive symptoms five years later. We used data from three waves of the China Health and Retirement Longitudinal Study (CHARLS), conducted in 2013, 2015, and 2018, to empirically test the serial multiple mediating effects of IADL disability and life satisfaction between cognitive function and depressive symptoms five years later. The influence of gender differences was also considered.
## 2.1. Data and Study Design
The data were freely obtained from three waves of the China Health and Retirement Longitudinal Study (CHARLS) conducted in 2013, 2015, and 2018. The CHARLS is a national longitudinal survey implemented by the National School of Development (China Center for Economic Research); it was first performed in 2011, and the participants have been followed up every two years [49]. The survey covers 28 provinces, 150 county-level units, and 450 communities in China, collecting information on Chinese adults such as demographic background, family structure, socioeconomic status, and health behaviors [50].
We ascertained each participant’s cognitive function at baseline in 2013, his/her IADL disability and life satisfaction in 2015, and his/her depressive symptoms in 2018. Given this time frame, we excluded the participants who had already developed IADL disability, life dissatisfaction, or depressive symptoms at baseline, as well as the participants with memory-related disorders, such as Alzheimer’s disease, brain atrophy, and Parkinson’s disease. Using data from each wave, we evaluated the associations among cognitive function, IADL disability, life satisfaction, and depressive symptoms.
At baseline in 2013, the total sample consisted of 18,612 participants. We excluded 4349 individuals who were lost to follow-up from 2013 to 2018. Meanwhile, 28 participants were excluded due to memory-related disorders, such as Alzheimer’s disease, brain atrophy, and Parkinson’s disease, and 36 participants under 40 years old were also excluded. We further excluded those who had already developed IADL disability ($$n = 3758$$), life dissatisfaction ($$n = 507$$), or depressive symptoms ($$n = 2046$$) at baseline in 2013. Finally, 1422 participants without complete information on the core variables (such as IADL disability and life satisfaction) or on other covariates were also excluded. The final number of participants aged 40 years and above available for the follow-up survey was 6466. The details are shown in Figure 1.
## 2.2.1. Cognitive Function
Cognitive function in the 2013 wave of the CHARLS was assessed by the TICS-10 (orientation and attention), word recall (episodic memory), and figure drawing (visuospatial ability) [51]. The TICS (Telephone Interview for Cognitive Status) included the serial subtraction of 7 from 100 (up to five times), the date (day, month, and year), the day of the week, and the season of the year; TICS-10 scores ranged from 0 to 10. Word recall was used to assess episodic memory. After being shown 10 Chinese nouns, the participants were asked to recall as many words as they could immediately (immediate recall), in any order, and to recall them again four to ten minutes later (delayed recall). The episodic memory score was the average number of words recalled across the immediate and delayed trials and ranged from 0 to 10. In terms of visuospatial ability, the respondents were shown a picture of two overlapping pentagons and asked to draw a similar figure; participants received a score of 1 if they drew it correctly and 0 otherwise [52,53]. The overall score ranged from 0 to 21, with higher scores indicating better cognitive function.
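To make the composite explicit, the sketch below assembles the overall score from the three components described above. It is purely illustrative; the function and variable names are hypothetical and do not come from the CHARLS codebook.

```python
# Illustrative composite scoring for the cognitive measure described above.
# Inputs and names are hypothetical, not actual CHARLS variable names.

def cognitive_score(tics_items, immediate_recall, delayed_recall, pentagon_correct):
    """Combine TICS-10 (0-10), episodic memory (0-10), and figure drawing (0-1)."""
    tics = sum(tics_items)                              # ten binary items -> 0..10
    episodic = (immediate_recall + delayed_recall) / 2  # mean of the two recalls -> 0..10
    drawing = 1 if pentagon_correct else 0              # correct pentagon copy -> 0..1
    return tics + episodic + drawing                    # overall range 0..21

# Example: 8/10 TICS items correct, 6 then 4 words recalled, correct drawing
print(cognitive_score([1] * 8 + [0] * 2, 6, 4, True))   # -> 14.0
```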
## 2.2.2. IADL Disability
Disability in the instrumental activities of daily living (IADL) was defined as dependence in at least one IADL task: doing housework, preparing meals, shopping, taking medication, managing money, and making phone calls [54]. The response options were 0 (no, I do not have any difficulty), 1 (I have difficulty but can still do it), 2 (yes, I have difficulty and need help), and 3 (I cannot do it). Respondents were classified as dependent when they could not carry out an IADL activity independently (any of the last three options) [55]. The total score ranged from 0 to 18, with higher scores indicating more severe dependence in the IADL items.
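A minimal sketch of this scoring rule is given below, assuming the item set and coding described above; it is illustrative rather than the survey’s actual processing code.

```python
# Hedged sketch of the IADL scoring rule described above: six items scored 0-3;
# a respondent is classified as dependent if any item was answered with one of
# the last three response options. Item names are illustrative.

IADL_ITEMS = ["housework", "meals", "shopping", "medication", "money", "phone"]

def iadl_summary(responses):
    """responses: dict mapping item name -> score 0..3 as coded in the questionnaire."""
    total = sum(responses[item] for item in IADL_ITEMS)          # 0..18
    dependent = any(responses[item] >= 1 for item in IADL_ITEMS)
    return total, dependent

print(iadl_summary({"housework": 0, "meals": 2, "shopping": 0,
                    "medication": 0, "money": 1, "phone": 0}))   # -> (3, True)
```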
## 2.2.3. Life Satisfaction
Life satisfaction was assessed with one broad question: “How satisfied are you with your life?” The respondents answered on a 5-point Likert scale, with higher scores indicating lower levels of life satisfaction. Assessing life satisfaction with a single intuitive question is easy to understand and accept, especially for older adults, and this approach has been used in previous research [56,57].
## 2.2.4. Depressive Symptoms
Depressive symptoms in the CHARLS were assessed by the 10-item short form of the Center for Epidemiologic Studies Depression Scale (CESD-10) [58]. Compared with the original CESD, the Chinese version of the CESD-10 showed considerable accuracy in classifying participants’ depressive symptoms (kappa = 0.84, $p \leq 0.01$) [58]. The CESD-10 comprises 10 questions about depression, each with four response options: 0 (rarely), 1 (some days; 1–2 days per week), 2 (occasionally; 3–4 days per week), and 3 (most of the time; 5–7 days per week) [59]. The total score ranged from 0 to 30, with higher values indicating more depressive symptoms [60]. Individuals who scored more than 10 were identified as having depressive symptoms [61].
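The scoring and cutoff described above are straightforward to express in code. The sketch below assumes that any reverse-coding of positively worded items has already been applied upstream; it is illustrative only.

```python
# Illustrative CESD-10 scoring following the coding described above
# (10 items scored 0-3, total 0-30, cutoff > 10 for depressive symptoms).
# Assumes positively worded items were already reverse-coded upstream.

def cesd10(answers):
    """answers: list of ten item scores, each in 0..3."""
    assert len(answers) == 10 and all(0 <= a <= 3 for a in answers)
    total = sum(answers)
    return total, total > 10   # (score, classified as having depressive symptoms)

print(cesd10([1, 2, 0, 3, 1, 2, 1, 0, 2, 1]))  # -> (13, True)
```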
## 2.2.5. Demographic Characteristics
We also considered the demographic characteristics of the individuals from the baseline in 2013, including age (years), gender (male, female), marital status (not married, married), smoking (yes, no), drinking (yes, no), social activities (yes, no), physical activities (yes, no), chronic disease (inapplicable, no, one, two and above), and self-rated health (very healthy, healthy, general, unhealthy, very unhealthy).
## 2.3. Data Analysis
In this study, IBM SPSS Statistics version 24 was employed for data processing and analysis. Descriptive statistics were used to characterize the study population. Independent t-tests or chi-squared tests were applied to compare differences between the genders. The PROCESS macro (Model 6) developed by Hayes [62] was used to examine whether IADL disability and life satisfaction mediated the association between cognitive function and depressive symptoms five years later. We also stratified the sample by sex to explore whether this relationship still held. Based on bias-corrected bootstrapping with 5000 samples, we constructed $95\%$ bootstrap confidence intervals (CIs); an effect was considered significant when the $95\%$ CI did not contain zero [63].
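To make the mediation logic concrete, the sketch below reproduces the core of a two-mediator serial model (X → M1 → M2 → Y) with percentile bootstrap CIs on synthetic data. It is a minimal illustration, not the SPSS PROCESS macro: the covariate adjustment and bias correction used in the paper are omitted, and all variable names and coefficients are hypothetical.

```python
# Minimal serial-mediation (PROCESS Model 6 style) sketch with percentile
# bootstrap CIs on synthetic data; not the actual SPSS macro.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                                     # cognitive function
m1 = -0.3 * x + rng.normal(size=n)                         # IADL disability
m2 = 0.05 * x + 0.2 * m1 + rng.normal(size=n)              # life satisfaction
y = -0.15 * x + 0.4 * m1 + 0.9 * m2 + rng.normal(size=n)   # depressive symptoms

def indirect_effects(x, m1, m2, y):
    a1 = sm.OLS(m1, sm.add_constant(x)).fit().params[1]                  # X -> M1
    fit_m2 = sm.OLS(m2, sm.add_constant(np.column_stack([x, m1]))).fit()
    a2, d21 = fit_m2.params[1], fit_m2.params[2]                         # X -> M2, M1 -> M2
    fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, m1, m2]))).fit()
    b1, b2 = fit_y.params[2], fit_y.params[3]                            # M1 -> Y, M2 -> Y
    return a1 * b1, a2 * b2, a1 * d21 * b2      # indirect1, indirect2, serial

boot = np.array([indirect_effects(x[idx], m1[idx], m2[idx], y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(5000))])
for name, col in zip(["via M1 only", "via M2 only", "serial M1 -> M2"], boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"{name}: 95% CI [{lo:.4f}, {hi:.4f}]")
```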
## 3.1. Characteristics of Participants
As shown in Table 1, a total of 6466 participants aged 40 years or above were included in our study; their mean age was 57.7 ± 8.5 years. The majority of the participants were married ($92.8\%$), did not smoke ($84.0\%$), and engaged in physical activities ($89.4\%$) or social activities ($64.5\%$). A total of $7.6\%$ of the participants clearly knew they had more than one chronic disease, and $40.4\%$ of the participants drank. Among all the participants, only $6.9\%$ and $15.9\%$ rated themselves as “very healthy” or “healthy”, respectively. In terms of gender, there were 3506 males and 2960 females, accounting for $54.2\%$ and $45.8\%$, respectively. The mean age was 58.8 ± 8.4 years for the males and 56.4 ± 8.3 years for the females. The results of the t-tests and chi-squared tests showed that, compared with the males, the females were more likely to be married, smoke, and have lower than moderate self-rated health status, and were less likely to drink. More detailed demographic characteristics are shown in Table 1.
## 3.2. Correlation between the Core Variables
Correlation analysis revealed that cognitive function was negatively correlated with IADL disability (r = −0.214, $p \leq 0.01$) and depressive symptoms five years later (r = −0.152, $p \leq 0.01$). Cognitive function was positively correlated with life satisfaction ($r = 0.026$, $p \leq 0.05$). IADL disability ($r = 0.152$, $p \leq 0.01$) and life satisfaction ($r = 0.147$, $p \leq 0.01$) were positively correlated with depressive symptoms five years later. IADL disability ($r = 0.039$, $p \leq 0.01$) was positively correlated with life satisfaction (Table 2).
## 3.3. Mediating Effect Analyses
To further elucidate the underlying mechanisms by which cognitive function is associated with depressive symptoms five years later, we explored the mediating roles of IADL disability and life satisfaction in this relationship. All analyses were adjusted for the demographic characteristics described above. The results are shown in Table 3 and Figure 2. Cognitive function had a significant negative association with depressive symptoms five years later (B = −0.1712, $95\%$ CI: −0.2050, −0.1374) and with IADL disability (B = −0.0655, $95\%$ CI: −0.0747, −0.0564). IADL disability had a significant positive association with depressive symptoms five years later ($B = 0.3765$, $95\%$ CI: 0.2874, 0.4655). Cognitive function had a significant positive association with life satisfaction ($B = 0.0048$, $95\%$ CI: 0.0003, 0.0094), and life satisfaction had a significant positive association with depressive symptoms five years later ($B = 0.9587$, $95\%$ CI: 0.7771, 1.1402). When controlling for IADL disability and life satisfaction, cognitive function remained negatively associated with depressive symptoms five years later, although the coefficient decreased (B = −0.1500, $95\%$ CI: −0.1839, −0.1161).
In addition, Table 3 presents the total and direct effects of cognitive function on depressive symptoms five years later and the mediating effects of IADL disability and life satisfaction. The total and direct effects were −0.1712 and −0.1500, respectively. When IADL disability and life satisfaction were each modelled as mediators, the path coefficients indicated that both had significant mediating effects (indirect effect 1 = −0.0247, $95\%$ CI: −0.0332, −0.0171; indirect effect 2 = 0.0046, $95\%$ CI: 0.0000, 0.0094). In addition, IADL disability and life satisfaction played a serial mediating role in the association between cognitive function and depressive symptoms five years later (indirect effect 3 = −0.0012, $95\%$ CI: −0.0020, −0.0003). Therefore, three mediating effects were found in this relationship: first, the mediating effect of IADL disability (effect = −0.0247); second, the mediating effect of life satisfaction (effect = 0.0046); and third, the serial mediating effect of IADL disability and life satisfaction (effect = −0.0012). These results confirmed the hypotheses proposed at the beginning of the study.
## 3.4. Gender Differences
With respect to gender differences, the full sample was divided into male ($$n = 3506$$) and female ($$n = 2960$$) groups for the mediating effect analyses. As shown in Table 3, IADL disability and life satisfaction partially mediated the relationship between cognitive function and depressive symptoms five years later for the females, and the separate indirect effects of IADL disability and of life satisfaction were also significant. For the males, however, there was only one significant mediation path, cognitive function → IADL disability → depressive symptoms, meaning that IADL disability alone mediated the relationship between cognitive function and depressive symptoms five years later.
## 4. Discussion
Based on the national longitudinal dataset from the CHARLS (2013, 2015, and 2018), we explored the relationship between cognitive function and depressive symptoms five years later among Chinese individuals aged 40 years and older and formulated a mediation model to examine the underlying mechanisms behind this association. The results showed that cognitive function is significantly associated with depressive symptoms five years later; in other words, cognitive decline is a risk factor for future depressive symptoms. IADL disability and life satisfaction play partial mediating roles, as well as a serial mediating role, in this relationship.
The results suggested that cognitive function is significantly associated with depressive symptoms five years later, in accordance with previous studies [64,65]. This means that the worse the cognitive function, the higher the risk of future depressive symptoms. Several longitudinal studies have provided evidence that cognitive decline precedes the onset of depressive symptoms [66]. Clinically, cognitive impairment involves several pathophysiological mechanisms, such as disturbances in the hypothalamic–pituitary–adrenal axis and abnormalities in brain-derived neurotrophic signaling [67], which, as risk factors, might increase the chances of future depressive symptoms. In general, the risk of depression is most commonly recognized in relation to signs of cognitive decline, such as memory lapses, slowed thinking, and confusion [68]. At the same time, people with cognitive decline experience depressive symptoms that can be interpreted as a psychological response; in other words, depression can be conceptualized as a psychological reaction to the perception of cognitive decline [69]. In addition, cognitive impairment may make individuals more susceptible to cognitive distortions (e.g., unrealistic expectations, hyper-responsiveness to external stimulation), which can impair people’s emotion regulation and further lead to depression [70,71].
After exploring the internal mechanism of the relationship between cognitive function and depressive symptoms five years later, we demonstrated that this association can be mediated by IADL disability and by life satisfaction, respectively. On the one hand, the results revealed that better baseline cognitive performance reduced the incidence of future IADL disability, consistent with previous findings that participants with impaired cognition were less likely to be independent [72,73]. A systematic review and meta-analysis established that IADL disability exists along a continuous course of cognitive decline [74]. Cognitive decline can affect people’s operational skills and fine motor control through neuropathological damage, resulting in IADL disability [75] and leading to losses of independence and productivity. When people become aware of the adverse effects of cognitive decline on their daily life, such as inconvenience in everyday activities, the disruption of their psychological balance can impose an obvious psychological burden and bring about depressive symptoms in the future [76]. On the other hand, life satisfaction played a mediating role between cognitive function and depressive symptoms. Interestingly, contrary to previous studies [77,78], we found that cognitive decline actually increased people’s life satisfaction, which in turn reduced the risk of developing depressive symptoms. With aging, some people experience a gradual decline in cognitive function; correspondingly, they may receive more material and emotional help from friends and relatives, which may protect them from negative emotions, improve their life satisfaction, and thus reduce the development of depressive symptoms [79]. Furthermore, the policy guarantees and medical services provided by the government for people with cognitive disorders also make them feel the care and support of society [80,81], which can improve their quality of life and life satisfaction and effectively prevent the occurrence of depression.
We also found that IADL disability and life satisfaction together played a serial mediating role in the relationship between cognitive function and depressive symptoms five years later. In detail, baseline cognitive decline was significantly associated with future IADL disability, which then reduced life satisfaction, which was in turn related to future depressive symptoms. Poor baseline cognitive ability increases the incidence of future IADL disability [27,82]. Adverse outcomes of IADL disability, such as social withdrawal, lack of energy or interest, and decreased self-efficacy, have been identified as strong predictors of reduced life satisfaction. Meanwhile, a large number of studies have shown that lower life satisfaction is an effective indicator of exposure to significant depressive symptoms [83]. Compared with the general population, individuals who are dissatisfied with their lives are more likely to have depressive symptoms and other mental health problems [84]. Therefore, IADL impairment caused by cognitive decline leaves many adults unable to perform their social roles and daily activities normally, thus affecting their life satisfaction [85]. To a certain extent, this causes psychological distress that is difficult to adjust to and may even develop into depression in severe cases [86].
From the perspective of gender differences, our findings showed that IADL disability and life satisfaction played a serial mediating role between cognitive function and depressive symptoms five years later in the females, whereas for the males only IADL disability had a significant mediating effect, which may be due to personality and biological differences between males and females. Moreover, menopause seems to increase women’s risk of cognitive impairment through changes in sex hormone levels, and the resulting decline in cognitive function may interfere with an individual’s activities of daily living [87]. Increased sensitivity to hormonal changes in some menopausal women makes them more susceptible to the negative emotions associated with cognitive decline and IADL disability, leading to lower life satisfaction and an increased risk of depressive symptoms.
Finally, this study has some limitations. Firstly, the assessment of variables through self-report questions or single items may have led to response bias and a lack of sensitivity [88], which made it difficult to detect subtle changes between samples and also produced significant but very small correlations between some variables. To make the results more convincing, future studies should introduce more objective and richer measurement methods that provide multidimensional information about the participants’ relevant indicators. Secondly, the dependent variable of depressive symptoms in this study was continuous. To explore the association between cognitive function and future depressive symptoms and its internal mechanism more clearly, clinical diagnoses and more elaborate psychological tests should be combined, with the incidence of major depression as the endpoint for in-depth analysis. Thirdly, the variables were collected from three waves of data in different years, with each variable measured at a single wave; the design is therefore neither purely cross-sectional nor a full longitudinal analysis. To make the findings more convincing, longitudinal research should be conducted to analyze how the relationship between cognitive function and future depressive symptoms changes over time and to establish causality.
## 5. Conclusions
This study provides evidence of the association between cognitive function and depressive symptoms five years later among Chinese individuals aged 40 years and older and confirms the sequential mediating effects of IADL disability and life satisfaction in this relationship. Future studies on this topic should scrutinize the relationship between cognitive function and depressive symptoms in more depth while considering differences in other factors. It is necessary to improve cognitive function and reduce the negative impact of disability on individuals, especially females, which is very important for enhancing their life satisfaction and preventing depressive symptoms.
# Intensity of Depression Symptoms Is Negatively Associated with Catalase Activity in Master Athletes
## Abstract
Background: This study examined associations between scores of depression (DEPs), thiobarbituric acid-reactive substances (TBARS), superoxide dismutase (SOD), and catalase activity (CAT) in master athletes and untrained controls. Methods: Participants were master sprinters (MS, $$n = 24$$; 50.31 ± 6.34 years), endurance runners (ER, $$n = 11$$; 51.35 ± 9.12 years), untrained middle-aged adults (CO, $$n = 13$$; 47.21 ± 8.61 years), and young untrained adults (YU, $$n = 15$$; 23.70 ± 4.02 years). CAT, SOD, and TBARS were measured in plasma using commercial kits. DEPs were measured by the Beck Depression Inventory-II. ANOVA, Kruskal-Wallis tests, and Pearson’s and Spearman’s correlations were applied, with a significance level of p ≤ 0.05. Results: The CAT of MS and YU [760.4 U·μL−1 ± 170.1 U·μL−1 and 729.9 U·μL−1 ± 186.9 U·μL−1] was higher than that of CO and ER. The SOD levels in the YU and ER [84.20 U·mL−1 ± 8.52 U·mL−1 and 78.24 U·mL−1 ± 6.59 U·mL−1 ($p \leq 0.0001$)] were higher than in CO and MS. The TBARS in CO [11.97 nmol·L−1 ± 2.35 nmol·L−1 ($p \leq 0.0001$)] was higher than in YU, MS, and ER. MS had lower DEPs than the YU [3.60 ± 3.66 vs. 12.27 ± 9.27 ($$p \leq 0.0002$$)]. A negative correlation was found between CAT and DEPs for master athletes [r = −0.3921 ($$p \leq 0.0240$$)], and a weak negative correlation [r = −0.3694 ($$p \leq 0.0344$$)] was found between DEPs and the CAT/TBARS ratio. Conclusions: The training model of master sprinters may be an effective strategy for increasing CAT and reducing DEPs.
## 1. Introduction
Master athletes are those over 35 years old who maintain a training routine, compete in national and/or international sports championships, and represent a distinct portion of the middle-aged and elderly population [1]. These athletes undergo intense physical training routines (3 to 6 sessions per week, totaling approximately 10 h or more of weekly training). The training programs of elite master sprinters, for example, are characterized mainly by high-intensity, low-volume sessions based primarily on anaerobic pathways [1,2]. The weekly training of such sprinters usually includes two sessions for speed (i.e., short sprints and long sprints for speed endurance), plus strength (i.e., weight lifting), power (i.e., plyometric exercises), and stretching and flexibility work. A few low-volume, moderate-intensity cardiovascular sessions may also be performed between high-intensity anaerobic training sessions. In contrast, the training programs of elite master endurance athletes are based mainly on low-intensity, high-volume sessions, relying primarily on aerobic pathways. Exercise modes and workloads are selected individually depending on the athlete’s training season [1,2]. Several studies have shown that master athletes have better physical performance, body composition, lipid profile, and blood glucose control, and attenuated biological aging compared with untrained peers, and they have thus been referred to as a model of healthy aging [3,4].
The healthy aging process of master athletes may be related to several biochemical mechanisms, including lower levels of oxidative stress and increased antioxidant defense [5]. A decreased antioxidant defense, on the other hand, has been identified as part of the pathogenesis of depression, a multifactorial disease linked to the aging process [6]. When the antioxidant system is impaired, reactive oxygen species can damage lipids, proteins, and DNA. In addition, pro-inflammatory cytokines (IL-6, TNF-α) increase the activity of indoleamine 2,3-dioxygenase (IDO), an enzyme involved in the synthesis of kynurenine from tryptophan. Kynurenine, in turn, appears to have potential neurotoxic actions, since kynurenine 3-monooxygenase (KMO) converts it into 3-hydroxykynurenine and 3-hydroxyanthranilic acid, precursors of quinolinic acid. Quinolinic acid is considered a metabolite that leads to excitotoxicity in the central nervous system and induces oxidative stress. Thus, some studies have shown that catalase seems to block the toxicity generated by 3-hydroxykynurenine [7,8,9].
Antioxidant defense, in turn, protects cells by removing free radicals. This antioxidant system comprises different functional components, such as superoxide dismutase (SOD) and catalase (CAT) [10]. SOD acts as a primary cellular defense against free radicals, since it catalyzes the dismutation of superoxide into oxygen and hydrogen peroxide. CAT is an antioxidant enzyme present in almost all aerobic organisms; its function is to break two molecules of hydrogen peroxide into one molecule of oxygen and two molecules of water [10,11]. In our previous studies, we have shown that master athletes have greater catalase activity than their non-athlete peers, in addition to other antioxidant enzymes such as SOD [3,4].
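For reference, the two enzymatic steps described above can be written as follows (standard biochemistry, not taken from the cited assay kits):

```latex
% Superoxide dismutase: dismutation of superoxide
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \xrightarrow{\text{SOD}} \mathrm{O_2} + \mathrm{H_2O_2}
% Catalase: decomposition of hydrogen peroxide
2\,\mathrm{H_2O_2} \xrightarrow{\text{CAT}} 2\,\mathrm{H_2O} + \mathrm{O_2}
```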
However, findings on catalase and its relationship with the intensity of depression symptoms (DEPs) are still inconsistent. According to Tsai and Huang [12], catalase activity is increased in patients in the acute phase of depression. On the other hand, in a meta-analysis by Jimenez-Fernandez [13], the differences in catalase levels among depressed and non-depressed people were not significant.
Furthermore, thiobarbituric acid-reactive substances (TBARS) are end products of lipid peroxidation that are abundant in the depressive process. The TBARS assay is the main method used to quantify these end products and is widely employed as a pro-oxidant marker of oxidative stress in tissues and cells [14]. Oxidative stress is defined as an imbalance between pro- and antioxidant molecules.
To the best of our knowledge, no research has examined the relationship between catalase activity, oxidative stress, and the intensity of depression symptoms in people who have followed a training regimen for most of their lives, such as master runners. Therefore, we aimed to analyze catalase activity, oxidative stress, and the intensity of depression symptoms in master athletes, their non-athlete peers, and a young control group. We hypothesized that master athletes have higher catalase activity and a lower intensity of depression symptoms compared with their non-athlete peers and the young control group, and that there is a negative correlation between catalase activity and the intensity of depression symptoms.
## 2.1. Ethical Approval
The study was approved by the Human Research Ethics Committee. All procedures were carried out according to the principles of the Declaration of Helsinki and Resolution 466/2012. All subjects who agreed to participate in the study provided written informed consent, which had been clearly explained before participation.
## 2.2. Participants
The total sample ($$n = 63$$) was composed of 35 elite male master athletes competing at regional, national, and international levels and 28 untrained individuals. The master athletes were subdivided into master sprint athletes (MS, $$n = 24$$), competing in the 100 m, 200 m, 400 m, and 110 m hurdles, among other events, and endurance runners (ER, $$n = 11$$), competing in events from 5 km to the marathon, including triathletes. The control groups consisted of young untrained (YU, $$n = 15$$) and middle-aged untrained (CO, $$n = 13$$) individuals. The young participants, mostly single college students, were all recruited in Brazil. Master athletes were recruited from participants in the Brazilian Master Athletics Championship (São Bernardo do Campo, Brazil, 2018), the Grandprix Del Mercosur (Montevideo, Uruguay, 2019), and the World Master Indoor Athletics Championship (Torún, Poland, 2019). The inclusion criteria for master athletes were: (1) systematic training for at least 10 years; and (2) active participation in national and/or international competitions up to the date of data collection. The non-athlete subjects of the control groups (young and middle-aged) were recruited through pamphlets and electronic advertisements in the city of Brasília-DF, Brazil, and met the inclusion criteria of being untrained and healthy. The exclusion criteria for all participants were: (1) a history of cardiometabolic diseases; (2) a history of inflammatory disease or cancer; (3) smoking; and (4) regular drug use, including hormone replacement therapy.
## 2.3. General Procedures
Data were collected in the laboratory between 7 and 9 a.m.; all volunteers had not exercised in the previous 12 h and had fasted for at least 8 h. The collection protocol consisted of (a) anamnesis, to collect data on health history and on training and/or physical activity history; (b) an assessment of the intensity of depression symptoms using the Beck Depression Inventory-II (BDI-II). The instrument has 21 items, each with four response statements, from which the subject chooses the one most applicable to how he or she has been feeling over the last two weeks, including the test date [15]. These items refer to levels of intensity of depression symptoms, and the total score is the sum of the individual items, reaching a maximum of 63 points. The final score is classified into minimal, mild, moderate, and severe levels, indicating the intensity of depression. The questionnaires were administered one day before the competitions at the athletes' accommodations, usually close to the competition venue, and the same researcher performed all of these assessments; and (c) collection of venous blood from the antecubital vein using a 4 mL vacutainer (with EDTA), followed by centrifugation (Sirius 4000, Sieger, Brazil) for 15 min at 3800 rpm for plasma and serum isolation, and storage in a freezer (−80 °C) for subsequent plasma analysis of catalase, superoxide dismutase (SOD), and TBARS.
## 2.4. Antioxidant Parameters
The antioxidant parameters used in this study were measured using commercial kits, following the manufacturers' protocols. SOD activity was measured using the SOD assay kit (Sigma Aldrich®, California, USA), with a final spectrophotometric reading at 450 nm; CAT activity was measured using the Amplex™ Red Catalase assay kit (Thermofisher Scientific®, California, USA), with a final spectrophotometric reading at 560 nm after one minute of incubation.
## 2.5. Lipid Peroxidation (TBARS)
The protocol used in the present study was adapted from Ohkawa et al. (1979). Briefly, serum samples were diluted in 320 μL of MilliQ H2O (1:5), and 1 mL of $17.5\%$ trichloroacetic acid (TCA), pH 2.0, was added, followed by 1 mL of $0.6\%$ thiobarbituric acid (TBA), pH 2.0. After homogenization, the samples were kept in a water bath for 30 min at 95 °C. The reaction was stopped by immersing the microtubes in ice and adding 1 mL of $70\%$ TCA, pH 2.0, followed by another incubation for 20 min at room temperature. After centrifugation (3000 rpm for 15 min), the supernatant was transferred to new microtubes and read by spectrophotometry at 540 nm. The concentration of lipid peroxidation products was calculated using the molar extinction coefficient for malondialdehyde (MDA equivalent; ε = 1.56 × 10⁵ M⁻¹·cm⁻¹).
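The final step above is a Beer–Lambert calculation. The sketch below shows how the MDA-equivalent concentration could be derived from the absorbance using the stated extinction coefficient; the path length and dilution factor are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of the MDA-equivalent calculation via the Beer-Lambert law,
# using the extinction coefficient stated above. Path length and dilution
# factor are illustrative assumptions.

EPSILON = 1.56e5   # M^-1 cm^-1, molar extinction coefficient of the MDA-TBA adduct
PATH_CM = 1.0      # assumed cuvette path length in cm

def mda_nmol_per_l(absorbance_540, dilution_factor=1.0):
    molar = absorbance_540 / (EPSILON * PATH_CM)   # concentration in mol/L
    return molar * dilution_factor * 1e9           # convert to nmol/L

print(f"{mda_nmol_per_l(0.002):.1f} nmol/L")       # ~12.8 nmol/L for A540 = 0.002
```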
## 2.6. Statistical Analysis
The data were analyzed for normality and homogeneity using the Shapiro-Wilk test and the Levene test, respectively. The data were expressed as mean, standard deviation (±), minimum, $25\%$ percentile, median, $75\%$ percentile, and maximum. A one-way ANOVA followed by Tukey's post hoc test was applied for comparisons among the studied groups for the age, catalase, and TBARS variables. The Kruskal-Wallis test with Dunn's multiple comparisons was applied to compare the groups on the depression and SOD variables. The Spearman correlation coefficient was used to verify the association between catalase activity and the intensity of depression symptoms. The significance level was set at $5\%$ (p < 0.05), and all procedures were performed using GraphPad Prism (v7.0, California, USA). To assess the clinical importance of the results, the effect size was calculated and classified as small ($r = 0.2$ to 0.49), moderate ($r = 0.5$ to 0.79), or large (r > 0.8) [15]. The a priori sample size calculation for a statistical power of $80\%$ (1 − β = 0.80) indicated 20 participants per group for a significance level of $5\%$ (α = 0.05) and an effect size of $f = 0.4$. Thus, we aimed for a sample of 80 subjects (20 for each studied group) [16].
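The decision flow described above (test normality, then choose a parametric or non-parametric group comparison, then correlate) can be sketched as follows on synthetic data; the group means loosely echo the depression scores reported below, and all values are illustrative.

```python
# Illustrative analysis flow mirroring the description above: Shapiro-Wilk,
# then one-way ANOVA or Kruskal-Wallis, plus a Spearman correlation.
# All data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.normal(loc, 3.0, 20) for loc in (3.6, 12.3, 4.6, 8.0)]  # MS, YU, CO, ER

if all(stats.shapiro(g).pvalue > 0.05 for g in groups):
    stat, p = stats.f_oneway(*groups)     # parametric route (Tukey follow-up omitted)
    test = "one-way ANOVA"
else:
    stat, p = stats.kruskal(*groups)      # non-parametric route (Dunn follow-up omitted)
    test = "Kruskal-Wallis"
print(f"{test}: statistic = {stat:.2f}, p = {p:.4f}")

cat = rng.normal(600, 150, 35)             # hypothetical CAT activities
dep = -0.01 * cat + rng.normal(10, 3, 35)  # hypothetical BDI-II scores
rho, p_rho = stats.spearmanr(cat, dep)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.4f}")
```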
## 3. Results
The characterization of the sample, the intensity of depression symptoms, CAT, SOD, and TBARS are expressed in Table 1 as mean and standard deviation. The intensity of depression symptoms in the YU (12.27 ± 9.27) was higher than in the MS group (3.60 ± 3.66; $$p \leq 0.0002$$) and CO (4.61 ± 2.56; $$p \leq 0.002$$). The CAT of MS and YU [760.4 U·μL−1 ± 170.1 U·μL−1 and 729.9 U·μL−1 ± 186.9 U·μL−1] was higher than that of CO and ER [410.3 U·μL−1 ± 67.24 U·μL−1 and 528.8 U·μL−1 ± 103.2 U·μL−1 ($p \leq 0.0001$)]. The SOD level in the YU and ER was higher than in CO and MS [84.20 U·mL−1 ± 8.52 U·mL−1 and 78.24 U·mL−1 ± 6.59 U·mL−1 ($p \leq 0.0001$)]. The TBARS in CO [11.97 nmol·L−1 ± 2.35 nmol·L−1 ($p \leq 0.0001$)] was higher than in YU, MS, and ER (Table 1).
Furthermore, a negative correlation was found between CAT and the intensity of depression symptoms for the entire group of master athletes [r = −0.3921 ($$p \leq 0.0240$$)] (Figure 1).
Similarly, the CAT/TBARS ratio had a negative correlation with symptoms of depression [r = −0.3694 ($$p \leq 0.0344$$)] (Figure 2).
The SOD/TBARS ratio was not correlated with the intensity of depression symptoms [$r = 0.1439$ ($$p \leq 0.4319$$)] (Figure 3).
The relationships between the intensity of depression symptoms and SOD [$r = 0.3320$ ($$p \leq 0.06$$)] and TBARS [$r = 0.0900$ ($$p \leq 0.61$$)] were not statistically significant.
## 4. Discussion
This was the first study to assess the intensity of depression symptoms and its relationship with TBARS, SOD, and CAT activity in master athletes. Our main findings were that: (i) the young control group presented a greater intensity of depression symptoms than both the master sprint athletes and the middle-aged untrained control group; (ii) the young control group did not differ from the endurance runners in intensity of depression symptoms; (iii) CAT activity was negatively associated with the intensity of depression symptoms in master athletes; and (iv) the CAT/TBARS ratio was negatively correlated with symptoms of depression.
The prevalence of depression is increasing in young adults (18 to 25 years old), especially during the college period. One possible explanation is that college students, in pursuit of academic performance, begin to neglect their time, social relationships, and well-being, and, as a consequence, also reduce their levels of physical activity [17,18,19,20]. All these changes can generate instability that contributes to reduced social support and increased stress, both of which are known to contribute to the emergence of mental disorders [21].
On the other hand, while depression is among the most prevalent age-related mental conditions, the literature places the master athlete as a model of healthy aging, provided that he maintains a balanced lifestyle with healthy eating, stress control, and regular exercise over many years [3,4,22,23]. In this regard, our findings revealed that master sprint athletes have a lower intensity of depression symptoms than untrained young people. A previous meta-analysis showed that high-intensity neuromuscular training is more effective in reducing the intensity of depression symptoms than aerobic exercise [24]. It is important to note that the training of our master sprint athletes demands more intense neuromuscular work than that of our endurance athletes.
In this regard, neuromuscular/resistance training would increase the release of brain-derived neurotrophic factor (BDNF) from muscle contraction, which reaches the brain and activates multiple signaling pathways that regulate the expression of antioxidant molecules [25,26]. BDNF also participates in the pathophysiological mechanism of depression: increased NF-κB signaling raises oxidative stress, increasing pro-inflammatory cytokines (IL-1 and IL-6) and decreasing BDNF, which results in decreased neurogenesis of brain cells [27].
Furthermore, Schuch et al. (2014) demonstrated the effects of 3 weeks of physical exercise in severely depressed hospitalized patients, who showed a decrease in TBARS levels after the exercise protocol was applied [28]. This result is in line with the present study, which demonstrated lower TBARS levels in master sprint and endurance athletes compared with the middle-aged group, suggesting a possible adjuvant antioxidant effect in combating the intensity of depressive symptoms in this population.
However, the findings on the activity of antioxidant enzymes and depression are controversial. Increased activities have been detected in some studies, whereas several others have published mixed or negative results for catalase activity in depression compared with healthy control groups [12,26,29].
Catalase (CAT) is an enzyme that catalyzes the breakdown of hydrogen peroxide into water and oxygen, mediating signaling in cell proliferation, apoptosis, carbohydrate metabolism, and platelet activation [30]. Humans with low catalase levels are at increased risk for diabetes and altered lipid and carbohydrate metabolism [31]. Some studies examining catalase activity in depressed patients have found increased activity during acute episodes of depression compared with healthy volunteers [31]. Szuster-Ciesielska et al. [32] also detected increased serum catalase activity in patients with major depression. The increase in catalase activity may reflect a compensatory mechanism: during depressive disorders, oxidative and nitrosative stress (O&NS) pathways are upregulated, and catalase may be increased to attenuate them, consistent with the role of oxidative free-radical signaling [32].
On the other hand, some clinical studies have reported a decrease in catalase activity during depressive episodes [31]. According to a study by Bhatt et al., mild chronic stress led to decreased catalase levels in the brain tissue of stressed mice; however, treatment with antidepressants had beneficial effects and increased catalase levels in these mice [31]. Additionally, catalase overexpression improves memory and reduces anxiety symptoms even in the absence of altered oxidative stress, and antidepressant treatment appears to increase levels of this antioxidant enzyme in patients with depression [29,32]. Something similar occurs with physical exercise: Sousa et al. [33] demonstrated in a meta-analysis that physical exercise seems to promote increased antioxidant defense. In line with this, our study showed increased activity of antioxidant defenses, mainly catalase, and a negative correlation between the intensity of depression symptoms and catalase activity in master athletes.
## Limitations
A possible limitation of this study is that we did not measure inflammatory indicators; however, the correlation between depression and inflammation is already well described in the scientific literature. On the other hand, we studied a group of high-level master athletes with a track record of long-term sprint and endurance training and success in national and/or international championships. To the best of our knowledge, this is the first study evaluating and comparing the intensity of depression symptoms and antioxidant parameters in elite master athletes and in middle-aged and young individuals with no lifelong training history.
## 5. Conclusions
In conclusion, master sprinters presented the lowest intensity of depression symptoms, with CAT activity higher than that of CO and ER. CAT and the CAT/TBARS ratio were negatively associated with the intensity of depression symptoms, suggesting that the training model of master sprinters may be effective in increasing CAT and reducing depressive symptoms. As a general recommendation, the lifestyle of master athletes, characterized mainly by, but not limited to, a lifetime of exercise training, seems to promote a better antioxidant defense system, favoring redox balance. A better antioxidant defense system is related to a lower intensity of depressive symptoms and attenuates the aging process, as documented in several previous studies [3,4,32]. Thus, individuals seeking these benefits should exercise regularly at proper doses according to their preferences, in addition to maintaining other healthy habits such as a balanced diet, proper sleep, and stress management.
# Effects of Virtual Reality Exercise Program on Blood Glucose, Body Composition, and Exercise Immersion in Patients with Type 2 Diabetes
## Abstract
Background: This study is a preliminary study examining the effect of a virtual reality exercise program (VREP) on patients with type 2 diabetes. Method: This was a randomized controlled trial in patients with type 2 diabetes (glycated hemoglobin ≥ $6.5\%$) diagnosed by a specialist. The virtual reality environment was set up by attaching an IoT sensor to an indoor bicycle and linking it with a smartphone, enabling exercise in an immersive virtual reality through a head-mounted display. The VREP was implemented three times a week for two weeks. Blood glucose, body composition, and exercise immersion were analyzed at baseline and before and after the two-week experimental intervention. Result: After VREP application, the mean blood glucose ($F = 12.001$, $p \leq 0.001$) and serum fructosamine ($F = 3.274$, $$p \leq 0.016$$) were significantly lower in the virtual reality therapy (VRT) and indoor bicycle exercise (IBE) groups than in the control group. There was no significant difference in body mass index between the three groups; however, the muscle mass of participants in the VRT and IBE groups significantly increased compared with that of the control group ($F = 4.445$, $$p \leq 0.003$$). Additionally, exercise immersion was significantly increased in the VRT group compared with the IBE and control groups. Conclusion: A two-week VREP had a positive effect on blood glucose, muscle mass, and exercise immersion in patients with type 2 diabetes and is highly recommended as an effective intervention for blood glucose control in type 2 diabetes.
## 1. Introduction
Diabetes is one of the major chronic diseases, with 463 million people reportedly affected worldwide in 2019, a figure expected to increase gradually to 578 million by 2030 and 700 million by 2045 [1]. According to the 2020 diabetes fact sheet in Korea, the prevalence of diabetes in South Korea among individuals over the age of 30 years increased from $11.1\%$ in 2013 to $13.8\%$ in 2018. Patients with diabetes often have concomitant obesity, hypertension, and hyperlipidemia, which increases the socioeconomic burden and lowers quality of life; thus, diabetes management is crucial [2]. The main treatment goals for patients with diabetes are to maintain normal blood glucose levels and prevent acute and chronic complications [3]. Treatment is largely divided into drug therapy and lifestyle modification; the latter can be classified into diet and exercise therapy, allowing patients to self-monitor and prevent complications. However, the practice rate of diet and exercise therapy is still quite low [4]. Exercise therapy increases insulin sensitivity, decreases fasting and postprandial blood glucose levels, reduces cardiovascular risk factors and weight, and improves patients' well-being [5,6,7,8].
However, only approximately $36\%$ of patients with diabetes in South Korea engage in regular physical activity [4]. In particular, the social distancing and isolation measures attributed to the recent SARS-CoV-2 virus (COVID-19) have further reduced physical activity [9]. A new and safe method for encouraging patients with diabetes to continue exercise therapy is therefore warranted. Accordingly, this study aimed to devise a new exercise therapy that increases patients' exercise practice rate by rapidly demonstrating the benefits of exercise, thereby enhancing exercise immersion and providing a short-term intervention effect.
Cycling is a representative aerobic exercise that can be easily performed. In particular, indoor cycling does not require much space and can be performed at any time, regardless of weather or season, even under the current COVID-19 conditions. It has the advantage of allowing an appropriate exercise intensity to be set for each individual, by measuring real-time speed, distance, exercise time, and calorie consumption, and by adjusting the resistance and rotation speed of the wheel [10]. However, indoor cycling can be boring, since it is performed alone in a fixed place, which may hinder engagement in continuous and repeated exercise [11]. Applications using virtual reality (VR) are being developed to compensate for these shortcomings, by increasing the interest and fun of indoor exercise and motivating individuals through a sense of achievement [12].
VR is a technology that provides a realistic experience by artificially creating environments that are difficult or impossible to access in reality [13]. A head-mounted display (HMD), an immersive VR device, is used in games, movies, education, and training, providing 360° visual immersion with images and immersive sound. The sense of reality provided by VR enhances exercise ability by sustaining immersion [14], thereby having a positive effect on participation and learning ability through improved concentration [15]. Recently, in the health care field, treatment using VR has been reported to be effective in improving motor performance and cognitive function and in preventing falls in patients with Parkinson's disease and stroke [16,17,18,19,20].
However, few previous studies have applied VR to patients with type 2 diabetes. Therefore, our study attempted to determine the effects of a 2-week VR exercise program (VREP) on blood glucose, body composition, and exercise immersion in patients with type 2 diabetes.
## 2.1. Study Design
This study was a randomized controlled trial, measuring the effects of a 2-week VREP on the blood glucose, body composition, and exercise immersion in patients with type 2 diabetes (Figure 1).
## 2.2. Participants
Patients were recruited via a recruitment notice at Eulji University Hospital. The inclusion criteria were: patients between 30 and 65 years of age, diagnosed with type 2 diabetes (glycated hemoglobin ≥ $6.5\%$), who had not participated in any exercise research program in the last 6 months, could use a smartphone, understood the study, and consented to participation. The exclusion criteria were: diabetic peripheral neuropathy, diabetic retinopathy, visual impairment, previous lower extremity joint surgery, stroke, severe arthritis, or dizziness.
The sample size was calculated using G-Power 3.1.9.4 (Heinrich Heine University, Germany) [21], by specifying the α value, power, and effect size. For a repeated measures analysis of variance (RM ANOVA) with an effect size of 0.24, a significance level of 0.05, a power (1 − β) of 0.80, three groups, and a correlation coefficient of 0.5, based on a previous study on diabetes [22], the total sample size derived was 39; the study was conducted on 45 participants, allowing for a dropout rate of $10\%$. A random-assignment function in Microsoft Excel was used to allocate 15 participants to each of the three groups. During the study, one participant in the VR therapy (VRT) group refused to continue, owing to dizziness while exercising with the HMD; two participants in the indoor bicycle exercise (IBE) group dropped out owing to COVID-19 self-quarantine; and one participant in the control group dropped out due to hospitalization for surgery. Thus, 14, 13, and 14 participants in the VRT, IBE, and control groups, respectively, were included in this study (Figure 2).
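For orientation, a comparable calculation can be sketched with statsmodels, though only as a between-subjects one-way ANOVA approximation: G-Power's RM ANOVA procedure additionally exploits the three repeated measurements and the assumed correlation of 0.5, so it yields a much smaller required sample than the figure below.

```python
# Rough sample-size sketch using a one-way ANOVA power analysis as a
# (conservative) stand-in for the RM ANOVA calculation described above.
from statsmodels.stats.power import FTestAnovaPower

total_n = FTestAnovaPower().solve_power(
    effect_size=0.24,  # Cohen's f, as specified in the text
    alpha=0.05,        # significance level
    power=0.80,        # desired power (1 - beta)
    k_groups=3,        # VRT, IBE, control
)
print(f"one-way approximation: total N = {total_n:.0f} (~{total_n / 3:.0f} per group)")
```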
## 2.3. Experimental Intervention
The exercise program was developed according to the recommendations of the American College of Sports Medicine [23] regarding the type and intensity of exercise, after consulting an internal medicine doctor, a professor of nursing, and a sports therapist. An indoor bicycle, which offers easy access to exercise, was selected for this study, and the exercise intensity and duration were set at low to moderate intensity for 40–60 min, considering the needs of patients with type 2 diabetes. Since 3 days of resistance exercise and 3 days of aerobic exercise are recommended per week, and the National Academy of Sports Medicine also recommends at least 2 days of resistance exercise and 3 days of aerobic exercise per week [24], our study prescribed 3 days of exercise per week. When exercising on the indoor bicycle, the exercise intensity was set at a low level (gear two of gears one to ten) and adjusted to an intensity that left participants "slightly out of breath" [25]. The exercise program consisted of warm-up, main, and cool-down exercises. Warm-up stretching was performed before the main exercise to prepare the heart and muscles and improve exercise capacity by increasing blood flow. Indoor cycling was performed as the main exercise, followed by cool-down stretching, which accelerates the breakdown of lactic acid accumulated in the blood after the main exercise and helps recovery from fatigue [26].
The exercise program was scheduled at a time convenient for the participants, to ensure exercise three times a week. After the patients were checked for respiratory symptoms and fever and disinfected their hands, the CGM Libre sensor was tagged with a smartphone to check their blood glucose. The participants then followed the exercise program according to the explanation and demonstration provided by the authors. After the exercise program, the CGM Libre sensor was tagged with a smartphone again to check the participants' blood glucose.
The VREP was applied to the VRT group for a total of 50 min, which included 10, 30, and 10 min of warm-up, main, and cool-down exercises, respectively; the main exercise consisted of 30 min of VR IBE. The VREP used an indoor bicycle (DP-652-G6, IWHASMP, China), and VR programs and applications.
An Internet of Things (IoT) sensor was attached to the pedal of an indoor bicycle, converting the indoor bicycle into a VR device. After downloading the VRFit application from the Play Store or Apple Store on their smartphone, logging in, and connecting to the IoT sensor, the participants set the VR background and music on the app screen. Once the smartphone was mounted on the HMD, turning the bicycle pedal displayed the selected virtual background and played the music. The exercise program was applied to the IBE group for a total of 50 min, including 10, 30, and 10 min of warm-up, main, and cool-down exercises, respectively. The control group did not participate in the exercise program and was allowed to follow their normal daily routine for 2 weeks, without intervention (Figure S1).
## 2.4.1. Mean Blood Glucose (MBG)
In this study, the MBG was obtained by attaching a FreeStyle Libre CGM (Abbott Diabetes Care, Alameda, CA, USA) sensor to the upper arm of the participant, and continuously measuring the glucose level through the interstitial fluid.
## 2.4.2. Serum Fructosamine
For serum fructosamine testing, 3 mL of venous blood was collected, placed in a serum separating tube bottle, and sent to the Green Cross for analysis. The test was conducted by a colorimetric method, using Cobas 8000 (c702, Roche Diagnostics, Mannheim, Germany), which was intended to assess short-term average blood glucose level, using the normal range of 205–285 µmol/L as the standard.
## 2.4.3. Body Composition
Body mass index (BMI) was calculated as the participant's weight (kg) divided by the square of their height (m²), measured using a body composition analyzer (InBody Dial h20b, Seoul, Korea). The same analyzer was used to measure muscle mass (kg).
## 2.4.4. Exercise Immersion
Exercise immersion was measured with a sports flow scale developed by modifying the expanded Sport Commitment Model scale of Scanlan et al. [27]. The scale comprises 12 items in two domains, cognitive immersion and behavioral immersion, scored on a 5-point Likert scale ranging from 1 ("strongly disagree") to 5 ("strongly agree"). The maximum total score is 60 points, and a higher score indicates a higher level of exercise immersion. The reported reliability of the scale is Cronbach's alpha 0.86–0.94; the reliability in this study was 0.90.
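Internal consistency figures like the alphas above follow from a standard formula; a small sketch is given below on a synthetic 12-item, 5-point response matrix (the data are illustrative, not the study's).

```python
# Small sketch of Cronbach's alpha for a 12-item, 5-point Likert scale such as
# the sports flow scale described above; responses are synthetic.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(3, 0.8, (100, 1))              # shared "immersion" trait
responses = np.clip(np.round(latent + rng.normal(0, 0.5, (100, 12))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high, on the order of 0.9
```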
## 2.5. Data Analysis
The collected data were analyzed using IBM SPSS software, version 26.0 (IBM Corp., Armonk, NY, USA). The general characteristics of the participants were summarized using frequencies, percentages, and means; the homogeneity of the general characteristics and dependent variables across groups was tested with ANOVA and the χ2-test. To verify the intervention effects, ANOVA and repeated measures (RM) ANOVA were used, with post hoc comparisons by the Scheffé and least significant difference (LSD) tests. RM ANOVA was performed to test differences in effects over time; when the sphericity assumption was not satisfied, the Wilks’ lambda multivariate test was used instead. Partial eta-squared (η²) for the group-by-time interaction was computed to quantify the degree of influence across the three groups.
## 2.6. Ethical Considerations
Before conducting this study, the research plan was approved by the Institutional Review Board of Eulji University (EU21-002). The research was conducted after registration with the Clinical Research Information Service (CRIS) (KCT0006654). The purpose of the study was fully explained to the participants selected for the experiment before obtaining written consent for their voluntary participation. The possibility of participation in and withdrawal from the experiment, premature discontinuation, adverse effects, and the treatment of such adverse effects were described and explained in the informed consent form. Participants were informed that the collected data would be ID-coded according to the personal information guidelines and used for approximately a year and, after the required storage period, destroyed by shredding and permanently deleted from the database. The VRT group was provided with a gift, exercise equipment, an M2Me IoT sensor, and an HMD in return for participation in the exercise study. The IBE group was provided with gifts and exercise equipment, and the control group was provided with gifts and exercise equipment after completion of the data collection.
## 3.1. Homogeneity Test for the Participants’ General Characteristics and Previous Dependent Variables
A total of 41 participants were included in the study. The results of the one-way ANOVA across the three groups, performed to verify the baseline homogeneity of the general characteristics and dependent variables, are presented in Table 1. The mean age was 52.93, 49.15, and 53.14 years in the VRT, IBE, and control groups, respectively, with no significant difference among the three groups. No significant differences were observed among the three groups in length of illness, sex, education level, smoking, or diabetes treatment before the experiment, confirming homogeneity.
The one-way ANOVA for baseline homogeneity of the dependent variables likewise confirmed the homogeneity of the three groups, as no significant differences were observed in the MBG measured by CGM, serum fructosamine, BMI, muscle mass, or exercise immersion.
## 3.2. Effects of VREP on the Blood Glucose, Body Composition, and Exercise Immersion
At pre-test (W2), the MBG demonstrated no significant difference. At post-test (W4), MBG was 122.86 mg/dL, 123.54 mg/dL, and 132.43 mg/dL in the VRT, IBE, and control groups, respectively, indicating no significant difference. Upon analyzing this result using RM ANOVA, a significant difference was observed in the interaction between group and measurement time (F = 12.001, p < 0.001), as shown in Table 2, and the partial η², which was the effect of the VREP according to group and time, was 0.387 (Figure 3a).
At baseline (W0) and pre-test (W2), serum fructosamine levels demonstrated no significant difference. At post-test (W4), the serum fructosamine level was 298.79 µmol/L, 307.69 µmol/L, and 311.43 µmol/L in the VRT, IBE, and control groups, respectively, indicating no significant difference. Upon analyzing this result using RM ANOVA, a significant difference was observed in the interaction between group and measurement time (F = 3.274, p = 0.016), as shown in Table 2, and the partial η², which was the effect of the VREP according to group and time, was 0.147 (Figure 3b).
At baseline (W0) and pre-test (W2), the BMI demonstrated no significant difference. At post-test (W4), the BMI was 24.94 kg/m2, 25.00 kg/m2, and 24.85 kg/m2 in the VRT, IBE, and control groups, respectively, indicating no significant difference (Figure 3c).
At baseline (W0) and pre-test (W2), muscle mass demonstrated no significant difference. At post-test (W4), muscle mass was 26.48 kg, 28.31 kg, and 24.87 kg in the VRT, IBE, and control groups, respectively, indicating no significant difference. Upon analyzing this result using RM ANOVA, a significant difference was observed in the interaction between group and measurement time (F = 4.445, p = 0.003), as shown in Table 2, and the partial η², which was the effect of the VREP according to group and time, was 0.190.
At baseline (W0) and pre-test (W2), the total exercise immersion score demonstrated no significant difference. At post-test (W4), the total exercise immersion score was 35.07, 30.00, and 26.14 points in the VRT, IBE, and control groups, respectively, indicating a significant difference among the three groups. Upon analyzing this result using RM ANOVA, a significant difference was observed in the interaction between group and measurement time (F = 4.418, p = 0.004), as shown in Table 2, and the partial η², which was the effect of the VREP according to group and time, was 0.183 (Figure 3d).
## 4. Discussion
This study evaluated the effects of implementing a 2-week VREP on the blood glucose, body composition, and exercise immersion of the participants. Considering the characteristics of patients with diabetes and the COVID-19 situation, an indoor bicycle, which combines the benefits of aerobic and resistance exercise, was selected as the exercise intervention. Additionally, VR was used to increase interest and immersion in the exercise.
Following the 2-week long experimental intervention, CGM-measured MBG decreased by 15 mg/dL, 9.08 mg/dL, and 0.93 mg/dL in the VRT, IBE, and control groups, respectively. Serum fructosamine decreased by 14.71 µmol/L, 3.31 µmol/L, and 0.07 µmol/L in the VRT, IBE, and control groups, respectively, indicating a significant decrease in the CGM-MBG and serum fructosamine in the VRT group compared with those of the control group. Thus, the level of blood glucose could be decreased by exercising for only 2 weeks.
Compared to several previous studies evaluating the effect of various types of exercises, such as treadmill and stationary bicycle [28], walking [29], and compound exercises [30,31], as well as a study applying an 8-week long exercise program in patients with type 2 diabetes [32], VREP for 2 weeks appeared to be effective, since the CGM-MBG and serum fructosamine decreased by 15 mg/dL and 14.71 µmol/L, respectively.
In general, exercise therapy should be performed continuously, and most exercise studies in patients with diabetes have been conducted for more than 6 weeks. However, given the significant decrease in CGM-measured MBG and serum fructosamine, 2 weeks of compound exercise, including aerobic and resistance exercise, was effective in controlling blood glucose levels in patients with type 2 diabetes. This study found that exercise, even for a short period of 2 weeks, had a positive effect on blood glucose control in patients with type 2 diabetes, which may motivate patients to start exercising and serves as an attractive point for emphasizing the importance of exercise.
In the post hoc group analysis, a significant difference was observed in the CGM-MBG in the VRT and IBE groups compared to that of the control group, and in the serum fructosamine between the VRT and control groups. Since CGM-measured MBG is the mean of continuous blood glucose levels, and serum fructosamine is a value that reflects the blood glucose level for 2–3 weeks, VREP was more effective in reducing blood glucose than IBE. Therefore, it is necessary to compare the measurements before and after the experimental intervention for 3 weeks in a future study, to confirm the results of serum fructosamine.
Considering the effect of VREP on body composition, no significant difference was observed in BMI among the three groups, whereas muscle mass increased by 0.31 kg in the VRT group, 0.26 kg in the IBE group, and decreased by 0.22 kg in the control group. Since in previous studies, 8 weeks of aerobic exercise [33], 12 weeks of walking exercise [34], and 12 weeks of compound exercise [35] decreased the BMI by 1.53 kg/m2, 1.75 kg/m2, and 1.55 kg/m2, respectively, the 2-week intervention period of our study seems insufficient to induce a change in the BMI. A longer period of exercise is required for weight loss and BMI reduction.
Compared to the results of previous studies, in which resistance exercise for 12 weeks [36] and compound exercise for 12 weeks [37] increased muscle mass by 3.4 kg and 0.85 kg, respectively, the 0.31 kg increase in the VRT group suggests that even 2 weeks of exercise can increase muscle mass. A previous study using a Theraband resistance band [38] reported that the thickness of both shoulder muscles began to increase after 2 weeks of exercise intervention; thus, muscle growth can occur with only 2 weeks of exercise. In general, an increase in muscle mass is crucial for a healthy body composition; moreover, these results may motivate patients with type 2 diabetes to start exercising.
In the post hoc group analysis, a significant difference in muscle mass was observed in the VRT and IBE groups compared with the control group. Because the effect may depend on the duration of the intervention, it is necessary to re-examine the experimental intervention with varying experimental periods.
In this study, owing to the 2-week experimental intervention, exercise immersion in the VRT group was significantly higher than that in the IBE and control groups. This study was conducted among patients with type 2 diabetes, and the symptoms of the participants were checked periodically before and after the experiment, in consideration of the possibility of cybersickness due to the HMD-based VREP. The average age of the participants in our study was approximately 51.8 years. During the course of this study, a 54-year-old patient in the VRT group dropped out owing to cybersickness, while the remaining participants in the VRT group completed the 2-week experimental intervention. Exercise immersion was much higher in the VRT group than in the IBE group. Therefore, the VREP was effective in increasing exercise immersion.
To date, no studies applying an immersive VREP in patients with diabetes exist. However, VREP implemented in other diseases and age groups was effective in improving physical function and muscle strength, by allowing participants to immerse themselves in the fun of exercise [16,17,18,19,20,39].
The use of VR in certain situations reportedly improves calorie consumption and exercise speed, compared with exercise in daily life [40]. Based on the results of such studies, VR increased exercise immersion and improved exercise effectiveness. Further, VR exercise may have a more positive effect on the patients’ health than IBE alone, which would thereby aid in blood glucose control and health management.
Two weeks after the exercise intervention, the exercise immersion score increased by 7.57 points in the VRT group and 2.46 points in the IBE group, and decreased by 0.29 points in the control group. In the post hoc group analysis, a significant difference in exercise immersion was observed between the VRT and IBE groups, indicating that the VR exercise intervention was highly effective for exercise immersion and that a new exercise program with VR stimulated the interest of the participants and motivated them to exercise. These results suggest that VR, which provides experiences that cannot be had in reality, increases engagement and immersion through interaction with participants, thereby improving satisfaction and giving a sense of achievement [41]. It can contribute to increasing the exercise practice rate by allowing users to continue exercising while immersed in it [42]. In a previous study, when VR was applied to a cycle exercise game, the VR group moved longer distances, with improved immersion, and demonstrated increased physical exercise ability [43]. A VR program using sensors designed to capture bodily movements can reportedly induce users to actively participate in the experience by allowing them to easily immerse themselves in the VR world.
The limitations of this study include the small number of participants, the short application period, the inability to completely restrict dietary and physical activities, and the fact that continuous monitoring of blood glucose by the CGM device may have had psychological effects on blood glucose control and exercise. In addition, one of the disadvantages of an immersive VREP is that it can cause cybersickness, which in this study resulted in a participant dropping out in the middle of the study. Therefore, this was a preliminary study conducted over a short period of time. Based on these results, a further study that considers the number of subjects, diet, and intervention period is needed to ensure generalizability.
## 5. Conclusions
A 2-week VREP application in patients with diabetes decreased their MBG, increased their muscle mass, and increased their exercise immersion. Since the VREP is effective in increasing participants’ exercise immersion and encouraging them to exercise, and even a short period of exercise is effective in reducing blood glucose, it could be a highly effective exercise program for patients with diabetes and could further motivate participants.
# A Head-to-Head Comparison of Two Algorithms for Adjusting Mealtime Insulin Doses Based on CGM Trend Arrows in Adult Patients with Type 1 Diabetes: Results from an Exploratory Study
## Abstract
Background: Continuous glucose monitoring (CGM) users are encouraged to consider trend arrows before injecting a meal bolus. We evaluated the efficacy and safety of two different algorithms for trend-informed bolus adjustments, the Diabetes Research in Children Network/Juvenile Diabetes Research Foundation (DirecNet/JDRF) and the Ziegler algorithm, in type 1 diabetes. Methods: We conducted a cross-over study of type 1 diabetes patients using the Dexcom G6. Participants were randomly assigned to either the DirecNet/JDRF or the Ziegler algorithm for two weeks. After a 7-day wash-out period with no trend-informed bolus adjustments, they crossed over to the alternative algorithm. Results: Twenty patients, with an average age of 36 ± 10 years, completed this study. Compared to the baseline and the DirecNet/JDRF algorithm, the Ziegler algorithm was associated with a significantly higher time in range (TIR) and lower time above range and mean glucose. A separate analysis of patients on CSII and MDI revealed that the Ziegler algorithm provides better glucose control and variability than DirecNet/JDRF in CSII-treated patients. The two algorithms were equally effective in increasing TIR in MDI-treated patients. No severe hypoglycemic or hyperglycemic episode occurred during the study. Conclusions: The Ziegler algorithm is safe and may provide better glucose control and variability than the DirecNet/JDRF over a two-week period, especially in patients treated with CSII.
## 1. Introduction
In the last two decades, continuous glucose monitoring (CGM) revolutionized the self-management of diabetes by providing users with near real-time information on their current glucose levels [1,2]. With the ever-increasing accuracy of sensors, more and more systems have been approved for non-adjunctive use by international regulatory agencies, in this way certifying that sensor glucose readings can be safely used for routine diabetes treatment decisions without confirmatory capillary blood testing [3,4].
One of the benefits of CGM is the prediction of future glucose levels with the so-called trend arrows, which indicate both the direction and the rate of change (ROC) of glucose at any given time. However, the meaning of the different arrows may vary depending on the manufacturer [5].
Interpreting trend arrows is a fundamental skill a patient should learn when using CGM. In clinical practice, people with diabetes are told to look at the trend arrows alongside current glucose values before physical activity, driving, bedtime, and before each meal to increase or reduce the calculated meal bolus [6,7]. The changes patients make in mealtime insulin dosage based on trend arrows are largely variable. As illustrated in the survey by Pettus et al., patients would adjust the mealtime dose by an average of 81% and 46% in cases of a predicted higher or lower value, respectively [8].
Within the last decade, a number of algorithms have been proposed to determine appropriate dose adjustments based on the trend arrows. However, there is still no consensus due to the lack of robust clinical trials. In two remarkable studies on type 1 diabetes, the Juvenile Diabetes Research Foundation (JDRF) Continuous Glucose Monitoring study and the Diabetes Research in Children Network (DirecNet) Applied Treatment Algorithm study, the ‘10%/20% rule’ for adjusting the bolus insulin dose was evaluated for the first time [9,10]. The results of these studies indicated that the use of the ROC might improve the post-prandial glucose level and the quality of life. Later, other authors proposed different algorithms to encourage people with diabetes to handle CGM data daily. According to Scheiner [11] and Pettus and Edelman [12], a defined value ranging from 25 to 100 mg/dL should be added to or subtracted from the current glucose level based on the trend arrow, and a correction bolus should be calculated according to the patient’s insulin sensitivity factor (ISF). Klonoff and Kerr [13] introduced an easy-to-use formula to adjust the meal bolus dose by adding or subtracting the same amount of insulin for all patients, namely 1, 1.5, or 2 insulin units for a ROC of 1–2, 2–3, and >3 mg/dL/min, respectively. Laffel and Aleppo proposed different trend-informed adjustments of bolus doses depending on the individual ISF (<25 mg/dL, 25 to <50 mg/dL, 50 to <75 mg/dL, or >75 mg/dL), with differences between children [14] and adults [15]. Ziegler and colleagues suggested trend-informed bolus adjustments based on both the ISF (the same strategy proposed by Laffel and Aleppo) and pre-meal glucose levels (<70 mg/dL, 70–180 mg/dL, 180–250 mg/dL, or >250 mg/dL) [16]. In 2021, Bruttomesso et al. modified Ziegler’s slide rule by increasing the number of glucose ranges and insulin sensitivity classes [17].
Taking into consideration the role of the ROC as expressed by the trend arrow before calculating the meal bolus, we designed our research to evaluate the short-term efficacy and safety of two different algorithms for bolus adjustments, namely the earlier and simpler DirecNet/JDRF algorithm and the novel (at the time this study was conducted) and more sophisticated Ziegler algorithm, in a sample of patients with type 1 diabetes using a CGM device.
## 2.1. Study Design
The current research is an exploratory single-arm, cross-over study. It was approved by the local Ethics Committee and performed at the diabetes care center of the University Hospital of Magna Graecia University, Catanzaro, Italy. Consecutive patients with type 1 diabetes using the Dexcom G6 (Dexcom, Inc., San Diego, CA, USA) CGM system and regularly attending the center were assessed for eligibility. Those who met the inclusion/exclusion criteria (Table 1) were invited to join the study and were enrolled after signing a written informed consent form. Patients’ characteristics and ongoing treatment were retrieved from the electronic medical record.
The study consisted of two 2-week-long intervention phases to evaluate two different algorithms for adjusting the meal bolus based on trend arrows, with a 7-day wash-out period with no trend-informed bolus adjustments between the two phases. Algorithm 1 was the simple DirecNet/JDRF algorithm, which suggests increasing or reducing the meal bolus, as previously calculated according to the insulin:carbohydrate ratio (ICR) and ISF, by 10% in the case of a 1–2 mg/dL/min rise or fall in sensor glucose levels and by 20% in the case of a >2 mg/dL/min rise or fall in sensor glucose levels, respectively [10]. Algorithm 2 was the more sophisticated slide rule by Ziegler et al., which suggests changes in the meal bolus according to the trend arrow, pre-meal glucose level (<70 mg/dL; 70–180 mg/dL; 180–250 mg/dL; >250 mg/dL), and individual ISF (<25 mg/dL; 25–<50 mg/dL; 50–<75 mg/dL; >75 mg/dL). When glucose is changing at a rate >3 mg/dL/min, insulin doses may vary by ±1–3.5 units; at a rate of 2–3 mg/dL/min, by ±0.5–2.5 units; and at a rate of 1–2 mg/dL/min, by ±0.5–1.5 units [16].
In the two weeks before study initiation, basal insulin doses, ICR, and ISF were optimized. The sequence of Algorithms 1 and 2 was randomly assigned to each participant. All patients received detailed instructions about the study protocol and a scorecard with the proposed adjustments. Patients on multiple daily injections (MDI) therapy were advised to round the final dose down to the nearest unit for safety reasons. Participants were also asked not to change their lifestyle throughout the study period and to take three meals without snacks when possible. A follow-up phone call on day 3 of both phases was scheduled to ensure that participants were accurately following the study protocol.
## 2.2. Outcome Measures
The outcome measures were CGM-derived glucose metrics as recommended by international consensus [16], including time in the 70–180 mg/dL glucose range (TIR), time below range (TBR), time above range (TAR), mean sensor glucose, the standard deviation of mean glucose (SD), coefficient of variation of mean glucose (CV), and the glucose management indicator (GMI). All the glucose metrics mentioned above were downloaded from the Dexcom Clarity platform for healthcare professionals. During this study, all occurrences of severe hypoglycemia, defined as an event requiring the assistance of another person to actively administer carbohydrates, glucagon, or take other corrective actions and severe hyperglycemia, defined as a hyperglycemic event requiring hospitalization, were recorded.
## 2.3. Statistical Analyses
Statistical analyses were performed using SPSS version 25.0 (IBM, Armonk, NY, USA). The normal distribution of variables was evaluated using the Shapiro–Wilk test. According to the study design, variables were collected and compared at baseline and after Algorithms 1 and 2 (study phases). Patients were analyzed both as a whole and divided according to the type of treatment (MDI or CSII). ANOVA and the related-samples Friedman two-way ANOVA by ranks were used to compare variables collected at baseline and with Algorithms 1 and 2. The Bonferroni post hoc test and the Wilcoxon signed-rank test were used to compare glucose metrics between study phases. The sample size was not based on a formal power calculation and was chosen pragmatically to collect adequate information on protocol efficacy and safety.
## 3. Results
Twenty patients with type 1 diabetes, aged 20–61 years, were recruited and completed this study. No severe hypoglycemic or hyperglycemic episodes occurred during the study. The characteristics of the patients enrolled in this study are illustrated in Table 2.
All patients were adherent and wore the sensor more than 70% of the time during each study phase. Glucose metrics collected at baseline and after using the two algorithms are shown in Table 3. TIR, TAR, and mean glucose significantly differed at the three timepoints of the study, while no difference was detected in TBR, SD, CV, and GMI. The post hoc analysis revealed TIR to be significantly higher and TAR and mean glucose to be significantly lower after Algorithm 2 compared to baseline and Algorithm 1.
Three patients (15%) had a TIR > 70% at baseline, whereas five (25%) and twelve (60%) patients had a TIR > 70% after two weeks of using Algorithms 1 and 2, respectively.
The mean insulin dose injected before meals was 15 ± 5 units with Algorithm 2 and 16 ± 6 units with Algorithm 1 (p = 0.08).
We then divided the participants according to the therapy regimen, MDI or CSII, and again compared the glucose metrics collected at baseline and the end of the two study phases. The results are displayed in Table 4 and Table 5.
In patients treated with MDI therapy, we found a statistically significant difference in TIR and TBR collected at baseline and after the two study phases. However, TBR was no longer significant (p = 0.076) when we excluded one patient with a TBR of 14% at baseline, 7% after Algorithm 1, and 4% after Algorithm 2. Post hoc analyses revealed the TIR to be statistically higher with both algorithms than at baseline.
In patients treated with CSII, TIR, TAR, mean glucose, SD, CV, and GMI were statistically different across the three timepoints of this study. In the post hoc analysis, the same variables differed significantly between Algorithm 2 and the baseline, while Algorithm 1 differed in TIR and TAR when compared to the baseline. Algorithm 1 and Algorithm 2 differed in TIR, TAR, and SD.
## 4. Discussion
Glucose trend arrows add important information for appropriate mealtime insulin dosing in patients with diabetes on intensive insulin treatment. However, there is still no consensus on adjusting the scheduled dose according to the upward or downward trend arrows available when using CGM.
In the absence of evidence, healthcare providers recommend increasing or decreasing the amount of insulin injected before a meal by following algorithms proposed by experts or according to self-reported patient experiences.
To our knowledge, this investigation is the first clinical trial evaluating the efficacy and safety of two different algorithms for adjusting the mealtime insulin dose based on trend arrows. Among the algorithms available in the literature when the protocol was submitted to the Ethics Committee, we focused on the simple-to-use algorithm adopted in the DirecNet/JDRF study and the more sophisticated algorithm by Ziegler et al.
In our study, using a structured approach as on-top therapy for adjusting mealtime insulin doses based on trend arrows improved CGM-derived measures of glucose control and variability, with the best results obtained using the Ziegler algorithm in the subgroup of patients treated with CSII. We believe that flexible insulin administration with CSII, which allows fractions of units to be delivered as a bolus, can magnify the fine-tuned dose adjustments of the Ziegler algorithm. Notably, the results were obtained without the occurrence of severe hypo- or hyperglycemia and without appreciable differences in the total daily bolus insulin dose, possibly due to a more appropriate within-day distribution of mealtime doses. However, the DirecNet/JDRF algorithm can be regarded as a valid alternative for increasing TIR in patients on MDI therapy.
Several scientific societies recommend using trend arrows for bolus insulin adjustments for diabetes care [7,14,15,18,19]. Unfortunately, the existence of different algorithms and the lack of guidance for choosing between these methods overcomplicate the clinical scenario.
The major strength of our research is the cross-over study design, which eliminates the influence of environmental factors, eating behaviors, and activity level. Notably, the algorithm was added as on-top therapy; that is, patients had adequate glycemic control at baseline, and basal insulin, ICR, and ISF were all optimized before the study.
The current study has some limitations. Firstly, we only included adult patients with type 1 diabetes; therefore, the applicability of our findings to patients with type 2 diabetes on intensive insulin treatment or to pediatric patients is unknown. However, Ziegler and colleagues propose dedicated tables for insulin-dependent type 2 diabetes and for children/adolescents with type 1 diabetes, possibly resulting in more flexible insulin dosing and better control of prandial glucose excursions in these groups of patients.
Secondly, we only evaluated the short-term safety and efficacy of the two algorithms. Further research is needed to clarify whether the use of the algorithms may be beneficial in the long term and help patients achieve their desired targets of HbA1c.
The interpretation of trend arrows undoubtedly adds a layer of complexity when deciding how much insulin to administer before a meal. Any tool facilitating daily dose calculations may result in the more persistent use of dose adjustment algorithms and the long-term improvement of glucose control. In line with this thought, a visual scorecard reporting discrete amounts of insulin units to either add to or subtract from the scheduled insulin bolus, such as the Ziegler et al. algorithm, may be more practical than other methods requiring complex calculations (percent dose increases/decreases or recalculation of meal doses based on predicted glucose levels). The development of CGM-informed bolus calculators (CIBC) with automatic trend-based dose adjustments is a further step towards simplifying daily self-management for patients on intensive insulin-based regimens. The feasibility of such an approach has recently been evaluated in a cohort of twenty-five patients with type 1 diabetes on CSII therapy participating in a two-phase, single-arm, prospective, multicenter study conducted in the U.S. At the end of that study, significantly fewer glucose readings of <70 mg/dL at four hours post bolus were found with the CIBC compared to standard bolus calculation without trend-based adjustment (2.1% ± 2.0% vs. 2.8% ± 2.7%, p = 0.03), while the percentages of readings >180 and 70–180 mg/dL remained the same, with no difference in insulin use or the number of boluses given between the two study phases [20]. However, none of the above-mentioned algorithms have been implemented in the automatic bolus calculators currently available on the market, either as smartphone applications or integrated into insulin pumps.
In recent years, the development of closed-loop systems providing glucose-responsive algorithm-driven insulin delivery revolutionized the treatment of type 1 diabetes, with ever-growing evidence highlighting their value in improving TIR, especially in the overnight period, without causing an increased risk of hypoglycemia [21,22,23,24,25,26,27]. Accordingly, international guidelines recommend that these systems be considered either for treating patients with suboptimal glycemia, significant glycemic variability, or impaired hypoglycemia awareness or to allow for permissive hyperglycemia due to the fear of hypoglycemia [2]. In patients treated with these new generation devices, avoiding insulin dose adjustments based on trend arrows is recommended, as algorithms are designed to automatically correct oscillations without external interference [18]. However, access to these new generation devices is still limited. Regional inequalities exist due to a lack of funding, underdeveloped health technology assessment bodies and guidelines, unfamiliarity with novel therapies, and inadequacies in healthcare system capacities [28]. Therefore, the thoughtful use of real-time glucose information, including insulin dose adjustments based on trend arrows, may help maximize glycemic outcomes in the greater proportion of patients using CGM devices [29,30].
## 5. Conclusions
The appropriate interpretation of trend arrows has the potential to maximize glycemic outcomes and improve engagement with diabetes self-management in patients with type 1 diabetes using CGM devices. We conducted a head-to-head comparison of two different algorithms for trend-informed bolus adjustments and have shown that the Ziegler algorithm is safe and provides better glucose control and variability than the DirecNet/JDRF algorithm, as measured by CGM over two weeks, especially in patients treated with CSII. Further research is needed to clarify whether these benefits are maintained in the long term and whether they also apply to patients with type 2 diabetes on intensive insulin treatment or to pediatric patients.
# A Single Bout of Remote Ischemic Preconditioning Suppresses Ischemia-Reperfusion Injury in Asian Obese Young Men
## Abstract
Remote ischemic preconditioning (RIPC) has been shown to minimize subsequent ischemia-reperfusion injury (IRI), whereas obesity has been suggested to attenuate the efficacy of RIPC in animal models. The primary objective of this study was to investigate the effect of a single bout of RIPC on the vascular and autonomic response after IRI in young obese men. A total of 16 healthy young men (8 obese and 8 normal weight) underwent two experimental trials: RIPC (three cycles of 5 min ischemia at 180 mmHg + 5 min reperfusion on the left thigh) and SHAM (the same RIPC cycles at resting diastolic pressure), each followed by IRI (20 min ischemia at 180 mmHg + 20 min reperfusion on the right thigh). Heart rate variability (HRV), blood pressure (SBP/DBP), and cutaneous blood flow (CBF) were measured at baseline, post-RIPC/SHAM, and post-IRI. The results showed that RIPC significantly improved the LF/HF ratio (p = 0.027), SBP (p = 0.047), MAP (p = 0.049), CBF (p = 0.001), cutaneous vascular conductance (p = 0.003), vascular resistance (p = 0.001), and sympathetic reactivity (SBP: p = 0.039; MAP: p = 0.084) after IRI. However, obesity neither exaggerated the degree of IRI nor attenuated the conditioning effects on the measured outcomes. In conclusion, a single bout of RIPC is an effective means of suppressing subsequent IRI, and obesity, at least in Asian young adult men, does not significantly attenuate the efficacy of RIPC.
## 1. Introduction
Remote ischemic preconditioning (RIPC), defined as a set of brief episodes of ischemia-reperfusion applied in tissues or organs distant from the heart, has been known to protect the cardiovascular system from subsequent ischemic events in animal [1,2] and human [3] studies. Therefore, ischemic preconditioning has been recognized as a non-invasive intervention to prevent ischemia-reperfusion injury (IRI), which inevitably occurs during the recovery process after ischemia [4]. Although the signaling mechanism of RIPC remains unclear, its protective effect is known to be attributed to both humoral and neural pathways [5].
However, the effectiveness of an acute bout of RIPC is still controversial. For example, a single bout of RIPC was shown to attenuate myocardial ischemic stress through the modification of autonomic nervous system activity in an animal model [6]. In human studies, the RIPC intervention was reported to suppress sympathetic elevation and oxidative stress, together with an improved reactive hyperemic response after IRI in healthy humans [3,7] and attenuated myocardial tissue damage in patients with myocardial infarction [8]. However, others have reported that a single bout of RIPC alters neither autonomic function in young healthy individuals [9], nor cerebrovascular function in the elderly [10]. Further, the efficacy of RIPC was shown to be reduced in some clinical conditions such as type 2 diabetes mellitus [11]. Such inconsistent results, especially in humans, may be partly explained by different health profiles and/or cardiovascular disease risk factors among individuals [12].
Obesity is prevalent worldwide and a strong predictor of ischemic stroke [13] and myocardial infarction regardless of sex, age, and ethnicity [14]; it is also likely to be linked to ischemic diseases [15]. Considering these aspects, the protective effects of RIPC on obesity are worth investigating; however, only a few animal-based studies have been conducted, showing conflicting results. For example, RIPC was shown to offer protective effects on the ischemic livers of obese rats [16], whereas other studies found no meaningful RIPC-induced augmentation of myocardial functions in obese animal subjects [17,18]. Further, to the best of our knowledge, there is no previous study investigating the efficacy of RIPC in obese individuals; therefore, it is uncertain whether obesity influences RIPC outcomes in humans.
For the abovementioned reasons, the purpose of the present study was to investigate whether a single bout of RIPC preserves vascular function and mitigates sympathetic reactivity after induced IRI, and to compare the possible differences between obese and normal-weight individuals. We hypothesized that (1) a single bout of RIPC would reduce the degree of IRI and (2) obesity would reduce the degree of RIPC-induced preservation of vascular function after IRI.
## 2.1. Ethical Approval
The current study was conducted after the approval from the Institutional Review Board at Kyung-Hee University (KHGIRB-21-531) and conformed to the standard set by the Declaration of Helsinki. All participants provided written informed consent before their study participation.
## 2.2. Study Design
A total of sixteen male participants (8 normal weight, 8 obese) were recruited for the present study (Table 1). Because all participants in this study were Asian, obesity was defined based on Asia Pacific body mass index criteria equal to or greater than 25 kg/m2 [19]. All participants completed a medical screening questionnaire and those who reported a presence or history of cardiovascular or metabolic disease were excluded from the study.
## 2.3. Experimental Procedure
The schematic view of the study procedure is presented in Figure 1. This study used a cross-over, repeated measures experimental design to test whether a single bout of RIPC modifies the degree of IRI in an obese population. All participants visited the laboratory three times at one-week intervals, having abstained from alcohol and caffeine consumption and strenuous physical activity for at least 24 h before the scheduled visits. During the first visit, the participants underwent health screening, demographic measurements, and experimental familiarization.
During the remaining two visits, RIPC and SHAM were performed in a counterbalanced order. For the experimental trials, the participants changed into medical scrubs upon their arrival at the laboratory and remained in the supine position on a bed for instrumentation. A contoured inflation cuff (18 × 108 cm) was placed on the left proximal thigh for the implementation of RIPC, and another cuff (24 × 122.5 cm) was placed on the right proximal thigh to induce IRI. Electrocardiogram standard limb leads (SE-1515, Edan Instruments Inc., Shenzhen, China) were placed on the torso to monitor heart rate variability (HRV). Cutaneous blood flow was measured (in perfusion units: PU) using laser Doppler flowmetry (LDF, VMS-LDF2, Moor Instruments Ltd., Devon, England) throughout the experiment. The Doppler probe was attached 2 cm from the medial side of the great saphenous vein, at the midpoint of the right tibia.
After the completion of instrumentation, participants rested quietly in a supine position for 20 min before the experimental measurements, which were carried out three times throughout the test (i.e., baseline, post-RIPC/SHAM, and post-IRI). Each measurement block started with cutaneous blood flow, which was continuously monitored throughout the protocol but analyzed at specific time points, followed by a 5 min HRV measurement. Subsequently, blood pressure and pulse rate were measured using a digital sphygmomanometer (BP742N, OMRON Corporation, Kyoto, Japan) together with the cold pressor test (CPT) to determine sympathetic reactivity. The CPT was carried out by immersing the participant’s right hand in cold water (4–5 °C) for 3 min, during which blood pressure and pulse rate were measured at the end of the first and third minutes.
Following the completion of the baseline measurement, RIPC or SHAM was carried out using a rapid cuff inflation system (E20, Hokanson Inc., Bellevue, WA, USA). The RIPC protocol consisted of 3 cycles of ischemia at 180 mmHg for 5 min and reperfusion for 5 min on the left leg. SHAM was performed in the same manner as RIPC, except that the compression intensity was set at each participant’s diastolic blood pressure measured at baseline. The post-RIPC measurement started immediately after the RIPC application, followed by induction of IRI on the right leg with 20 min of ischemia at 180 mmHg and 20 min of reperfusion. We judged that the ischemic stress threshold was reached when the cutaneous blood flow value fell below 20% of the baseline value [20]. Finally, as soon as the reperfusion period was over, post-IRI measurements were taken in the order described above.
## 2.4. Calculation
Power spectral analysis of HRV (1600 Hz sampling frequency) was conducted using the fast Fourier transform and expressed as the ratio of the low frequency (0.04–0.15 Hz) to high frequency (0.15–0.4 Hz) (LF/HF ratio) to determine an overall balance between the sympathetic and parasympathetic activities. In addition, the lower limb cutaneous vascular conductance (CVC) and cutaneous vascular resistance (CVR) were calculated based on cutaneous blood flow and mean arterial pressure as shown below.
## 2.5. Statistical Analyses
All data in this study were analyzed using SPSS (Ver. 26, IBM, Somers, NY, USA) and presented as mean and standard deviation. A two-way repeated measures ANOVA (2 conditions × 3 time points) with obesity as a between-subject factor was used to compare dependent variables between RIPC and SHAM. When a significant F-value was detected with Greenhouse–Geisser correction for sphericity, a post hoc pairwise comparison with Bonferroni correction was carried out to compare conditions at each time point. The significance level of all statistical analyses was set at α = 0.05.
## 3.1. Heart Rate Variability, Blood Pressure, and Resting Heart Rate
There was a significant interaction for the LF/HF ratio after IRI in the RIPC compared with the SHAM condition (F = 4.631, p = 0.027) (Table 2). Consistently, SBP (F = 3.423, p = 0.047) and MAP (F = 3.488, p = 0.049) after IRI were significantly lower in RIPC compared with SHAM, but no difference was found for DBP (F = 1.698, p = 0.206) or HR (F = 0.589, p = 0.550) (Figure 2). However, obesity did not alter the effect of RIPC on these variables (LF/HF ratio: F = 0.050, p = 0.939; HR: F = 1.260, p = 0.301; SBP: F = 0.634, p = 0.537; MAP: F = 0.572, p = 0.564; DBP: F = 0.572, p = 0.564).
## 3.2. Sympathetic Reactivity
A significant interaction was found for sympathetic reactivity, with an attenuated SBP response to cold in RIPC compared with SHAM after IRI (F = 5.382, p = 0.039); however, no significant difference was found in MAP (F = 3.549, p = 0.084) or DBP (F = 1.546, p = 0.238). Similarly, obesity did not alter sympathetic reactivity (SBP: F = 0.726, p = 0.411; MAP: F = 0.012, p = 0.913; DBP: F = 0.062, p = 0.808).
## 3.3. Cutaneous Vascular Responses
A significant interaction was found for CBF, with an alleviated reduction in CBF after IRI in the RIPC compared with the SHAM condition (F = 10.111, p = 0.001). Consequently, a significantly increased CVC and decreased CVR were found after IRI in the RIPC condition (CVC: F = 7.828, p = 0.003; CVR: F = 10.576, p = 0.001). However, the reactive hyperemia index (RHI) did not differ between conditions (F = 0.716, p = 0.474) (Figure 2), and obesity did not influence any of the vascular variables (LDF: F = 0.231, p = 0.784; CVC: F = 0.185, p = 0.819; CVR: F = 0.032, p = 0.962; RHI: F = 0.973, p = 0.381).
## 4. Discussion
To our knowledge, this is the first study investigating the effectiveness of RIPC in obese humans. It was found that a single bout of RIPC significantly suppressed the IRI-induced aggravation of vascular function and sympathetic reactivity compared with SHAM. However, there was no difference in any outcome measures between obese and normal-weight individuals. These results suggest that a single bout of RIPC is an effective means of mitigating injuries resulting from a subsequent ischemic event and such modifications were neither blunted nor magnified by obesity, at least in young adult males, in the present study.
The present results showed significant inhibitory effects of RIPC on the reduction in CBF; in agreement with previous findings, these effects mitigated the IRI-related impairment in CVC and CVR. Kraemer et al. reported that a single bout of RIPC significantly increased blood flow and tissue oxygen saturation during the reperfusion phase in healthy young men [21]. Moreover, Kharbanda et al. showed that blood flow in response to acetylcholine was decreased after IRI alone, whereas a single bout of RIPC suppressed this reduction in healthy humans [1]. These acute preconditioning effects on vascular function, such as reduced coronary resistance and increased cerebral blood flow, have also been reported in animal studies [22,23,24]. Moreover, the significantly suppressed elevation in SBP and MAP in the RIPC condition (Figure 2), which might be explained by the augmented cutaneous vasodilation, supports previous findings [25] and implies therapeutic potential for blood pressure management [26].
Considering that IRI-induced aggravation of vascular function is attributed to a decline in nitric oxide bioavailability and sympathetic-overactivation-induced vasoconstriction [27,28], the single bout of RIPC in the present study is thought to have countered such impairments either singly or in combination. Although previous studies demonstrated that RIPC-induced vasodilation originates from both endothelium-dependent and -independent vasodilators [29,30], our ability to determine which of the two vasodilation mechanisms was responsible for the improved vascular function in the present experiment is limited. On the other hand, the suppressed vascular impairment in RIPC was accompanied by a significant attenuation in the LF/HF ratio (Table 2) and in the sympathetic reactivity to cold stimulation after IRI (Figure 2), similar to previous reports of RIPC-induced improvement in sympathovagal balance in healthy humans [31] and patients with angina pectoris [25].
Both obesity and IRI involve hypoxic conditions and share similar inflammatory profiles, including the excessive production of reactive oxygen species and inflammatory cytokines [32,33]. Therefore, owing to an increased susceptibility to ischemic injury, we expected a greater degree of IRI together with reduced RIPC-induced preservation of vascular function in obese individuals compared with the normal-weight participants. Contrary to our expectations and to previous results from animal models [17,18], the present results showed positive effects of RIPC on vascular and autonomic functions in obese participants after IRI, although the obese individuals showed a lesser degree of CBF recovery and maintenance over time compared with the normal-weight individuals in the first 2 min of reperfusion (Figure 3).
This might be due to the characteristics of the obese subjects participating in this study. Activation of phosphatase, known to limit the efficacy of both preconditioning and postconditioning with aging, was more pronounced in obese rats [34,35]; in contrast, we recruited young healthy subjects without a history of cardiovascular or metabolic disease. Regardless of the degree of BMI, the longer the duration of obesity, the higher the risk of cardiovascular disease [36,37]. The participants’ short duration of obesity and healthy physical condition most likely offset the adverse effects of obesity, such as cardiovascular disease and impaired physical function.
This study has several limitations. First, the present results and interpretation regarding obesity are limited to Asian men. Secondly, we also excluded female participants to rule out the effect of hormonal changes on measurements such as HRV. Finally, any blood parameters that may have been responsible for explaining the mechanisms and/or effects of RIPC on the outcomes were not included. Therefore, the potential roles of some important markers such as inflammatory cytokines and various vasodilators were limited in the present interpretation.
## 5. Conclusions
A single bout of RIPC was found to be an effective means of reducing impairment in vascular function and hyper-sympathetic nerve activity resulting from an acute ischemic event. Further, based on the present outcome measures, obesity neither significantly aggravated the degree of IRI nor abolished the favorable effects of RIPC in Asian obese young men. Future studies are warranted to investigate how repeated bouts of RIPC could further influence the functioning of the vascular systems in diverse obese populations. |
# Risk Factors of Microalbuminuria among Patients with Type 2 Diabetes Mellitus in Korea: A Cross-Sectional Study Based on 2019–2020 Korea National Health and Nutrition Examination Survey Data
## Abstract
Diabetes mellitus is a chronic disease with high economic and social burdens. This study aimed to determine the risk factors for microalbuminuria among patients with type 2 diabetes mellitus. Microalbuminuria is predictive of early-stage renal complications and subsequent progression to renal dysfunction. We collected data on type 2 diabetes patients who participated in the 2019–2020 Korea National Health and Nutrition Examination Survey. The risk factors for microalbuminuria among patients with type 2 diabetes were analyzed using logistic regression. As a result, the odds ratios were 1.036 (95% confidence interval (CI) = 1.019–1.053, p < 0.001) for systolic blood pressure, 0.966 (95% CI = 0.941–0.989, p = 0.007) for high-density lipoprotein cholesterol level, 1.008 (95% CI = 1.002–1.014, p = 0.015) for fasting blood sugar level, and 0.855 (95% CI = 0.729–0.998, p = 0.043) for hemoglobin level. A significant strength of this study is the identification of a low hemoglobin level (i.e., anemia) as a risk factor for microalbuminuria in patients with type 2 diabetes. This finding implies that the early detection and management of microalbuminuria can prevent the development of diabetic nephropathy.
## 1. Introduction
Microalbuminuria is defined as the persistent elevation of albumin excretion (30–300 mg/day) in urine. This range is higher than that of normoalbuminuria (<30 mg/day) but lower than that of overt albuminuria (>300 mg/day) [1]. Microalbuminuria is a known early predictor of kidney and cardiovascular diseases as well as diabetes and hypertension [2,3,4,5], and an increased albumin concentration in the urine is a result of kidney disease [6]. The kidney is mainly composed of microvessels, and diabetic nephropathy occurs due to defects in the glomerular filtration barrier caused by damage to the renal microvasculature of the glomerulus; the lack of adequate regulation of blood pressure or blood glucose levels in patients with hypertension or diabetes, respectively, can accelerate this condition [7]. Diabetic nephropathy is initially characterized by an increase in the glomerular filtration rate (hyperfiltration), followed by a period of microalbuminuria. Among diabetic patients, the proportion of those with microalbuminuria increases by 20% each year, and such patients are eventually diagnosed with diabetic nephropathy when overt albuminuria is detected [8,9]. Approximately 20–40% of diabetic patients develop diabetic nephropathy; it is a major complication of diabetes that reduces the patient’s quality of life. Hence, the early diagnosis of diabetic nephropathy is critical to enable active treatment [10,11].
The prevalence of diabetes mellitus continues to increase due to advances in medical technology, longer life expectancy, and changes in diet and lifestyle. As a result, the rate of diabetic nephropathy has steadily increased, and it is currently the most common cause of end-stage renal failure worldwide [12,13]. Microalbuminuria is the most important indicator of diabetes-related kidney complications, as it appears in the early phase and predicts the progression of complications [2,3,4]. Moreover, the risk of cardiovascular complications increases in diabetic patients with microalbuminuria; therefore, early screening and prevention are important [5]. Hence, the American Diabetes Association has recommended that a microalbuminuria test be performed at diagnosis and annually thereafter [14].
Abnormal glucose homeostasis in diabetic patients results in various complications, as it is accompanied by hyperglycemia caused by a lack of insulin secretion, hypertension, and metabolic disorders [15,16]. Diabetes can induce macrovascular complications that affect the brain, heart, and peripheral vessels, and microvascular complications that damage the eyes, kidneys, and nerves [17,18]. Diabetic nephropathy is the most common diabetes-related microvascular complication; it is the main cause of 30–40% of chronic kidney disease (CKD) cases and accounts for 45% of end-stage renal disease (ESRD) cases [19]. The prevalence of ESRD in diabetic patients is 10 times higher than that in non-diabetic patients [20]; over the past decade, the incidence of ESRD has rapidly increased as a result of the high incidence of diabetes [20]. Diabetic nephropathy is a severe complication in patients with diabetes; it is associated with mortality and increased risks of cardiovascular disease and ESRD, for which renal replacement therapy such as dialysis or transplantation is required [21]. This leads to social burdens and enormous economic costs. Therefore, the risk factors for microalbuminuria should be identified in the early stages of diabetes to prevent the occurrence of complications [22,23].
Related studies have reported that microalbuminuria is an integrated index for renal and cardiovascular risk reduction in patients with type 2 diabetes (T2DM) [24], and that hypertension in patients with diabetes accelerates the onset of diabetic nephropathy in the presence of microalbuminuria [25,26]. Microalbuminuria in patients with diabetes is associated with, among others, diabetes duration, blood pressure, fasting blood sugar (FBS) level, glycosylated hemoglobin (HbA1c) level, serum insulin concentration, dyslipidemia, smoking, and body mass index (BMI) [27,28,29,30,31,32].
However, most studies among patients with diabetes have been conducted in clinical settings. This study aimed to identify appropriate management measures to slow the development of complications, taking into account the general characteristics of community-dwelling patients with diabetes. Therefore, we conducted this study to reduce the socioeconomic burden caused by diabetes complications and to contribute to public health by identifying the risk factors for microalbuminuria in patients with T2DM living in the community.
## 2.1. Data Collection and Study Population
The study included adults aged ≥30 years who participated in the eighth Korea National Health and Nutrition Examination Survey (KNHANES) in 2019–2020 [33]. As a nationwide cross-sectional study conducted by the Korea Centers for Disease Control and Prevention based on Article 16 of the National Health Promotion Act, the KNHANES provides reliable statistics that can be used to assess the health and nutritional status of the Korean population [34]. The KNHANES data are useful in the development of health policies that reflect the current health status of people in South Korea. In accordance with the Korean Bioethics and Safety Act, the KNHANES is a government-run research project for public welfare and has been conducted with Institutional Review Board exemption since 2015. The requirement for informed consent was also waived.
Among the participants of the 2019–2020 KNHANES, 11,093 adults aged ≥30 years were initially considered for this study; among them, 1737 patients diagnosed with T2DM by a physician, receiving hypoglycemic drugs, or with an FBS level of ≥126 mg/dL or an HbA1c level of ≥6.5% were selected. Patients diagnosed with T1DM (including those receiving insulin injection monotherapy), renal disease (CKD or ESRD), cardiovascular disease, or hypertension prior to the diagnosis of diabetes, all of which could affect the level of microalbuminuria, were excluded. Overall, 539 patients were included in the subsequent analyses.
## 2.2. Assessment of Microalbuminuria Using the ACR Index
In microalbuminuria, the amount of albumin excreted in the urine is 30–300 mg per 24 h (or 30–300 μg/mg creatinine) [1]. Microalbuminuria cannot be detected using a standard urine dipstick test; therefore, the albumin-to-creatinine ratio (ACR), estimated from a random urine sample, is used to screen for it. In this study, the albumin and creatinine levels in a spot urine sample were measured using a turbidimetric assay to estimate the ACR.
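To make the screening rule concrete, the following minimal Python sketch (illustrative only, not the authors' code; the function names and input units are assumptions) computes a spot-urine ACR and applies the 30–300 mg/g creatinine cutoffs described above.

```python
# Illustrative sketch: compute a spot-urine ACR (mg albumin per g
# creatinine) and classify albuminuria status by the cutoffs above.

def acr_mg_per_g(urine_albumin_mg_dl: float, urine_creatinine_mg_dl: float) -> float:
    """ACR from spot-urine albumin and creatinine, both in mg/dL."""
    # The mg/dL units cancel; the factor of 1000 expresses albumin
    # per gram (rather than per milligram) of creatinine.
    return urine_albumin_mg_dl / urine_creatinine_mg_dl * 1000.0

def albuminuria_status(acr: float) -> str:
    """<30 mg/g: normo-; 30-300 mg/g: micro-; >300 mg/g: albuminuria."""
    if acr < 30.0:
        return "normoalbuminuria"
    if acr <= 300.0:
        return "microalbuminuria"
    return "albuminuria"

print(albuminuria_status(acr_mg_per_g(4.0, 80.0)))  # 50.0 mg/g -> "microalbuminuria"
```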
## 2.3. Anthropometric and Biochemical Data
For the blood pressure data, the final systolic blood pressure (SBP) and diastolic blood pressure (DBP) measurements were used. Blood pressure was measured using a manometer (Baumanometer Wall Unit 33, Baum, Methuen, MA, USA) with the arm supported at the level of the heart after a 5 min rest. To obtain the anthropometric data used to estimate the BMI, standard devices and measurement methods were used; height and weight were measured to the nearest 0.1 cm and 0.1 kg, respectively, using portable measurement equipment (Seca 225, Seca Deutschland, Hamburg, Germany; GL-6000-20, G-Tech, Uijeongbu, Republic of Korea). Obesity was determined based on the BMI, which was calculated by dividing the weight in kilograms by the height in meters squared. The BMI was categorized based on the World Health Organization criteria for the Asia-Pacific region: a BMI of <18.5 kg/m2 is categorized as underweight, a BMI of ≥18.5 but <23 kg/m2 as normal, a BMI of ≥23 but <25 kg/m2 as overweight, and a BMI of ≥25 kg/m2 as obese. The waist circumference (WC) cutoff points for Korean individuals were determined according to the criteria suggested by the Korean Society for the Study of Obesity: 90 cm for men and 85 cm for women.
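As a worked example of the BMI rule above, the sketch below (a hypothetical helper, not part of the survey protocol) computes the BMI and its Asia-Pacific WHO category.

```python
# Hypothetical helper: BMI (kg/m^2) and its WHO Asia-Pacific category,
# following the cutoffs stated in the text.

def bmi_category(weight_kg: float, height_m: float) -> tuple[float, str]:
    bmi = weight_kg / height_m ** 2  # weight (kg) divided by height (m) squared
    if bmi < 18.5:
        category = "underweight"
    elif bmi < 23.0:
        category = "normal"
    elif bmi < 25.0:
        category = "overweight"
    else:
        category = "obese"
    return bmi, category

print(bmi_category(70.0, 1.70))  # (24.22..., 'overweight')
```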
For blood testing, blood samples were collected in the morning after fasting for at least 8 h. The Hitachi Automatic Analyzer 7600-210 (Hitachi, Japan) was used to obtain the measurements. The following biochemical data were collected: FBS, HbA1c, hemoglobin (Hb), serum lipid (triglycerides (TG), total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C)), serum albumin, and creatinine levels.
## 2.4. Demographic Characteristics
The demographic characteristics obtained included age (<50, 50–59, 60–69, and ≥70 years), marital status (single, married, separated, or divorced), and economic status (low, middle–low, middle–high, or high), which was defined by quartiles of the average monthly household income. The household income was divided by the square root of the number of household members, the standard equivalence method recommended by the Organization for Economic Cooperation and Development. In terms of smoking status, the patients were categorized as never, ex-, and current smokers.
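The OECD square-root equivalence scale mentioned above can be illustrated with a short sketch; the quartile cutoffs here are placeholders, since the actual KNHANES cutoffs are not reproduced in this paper.

```python
# Sketch of the OECD square-root equivalence scale: household income is
# divided by the square root of the household size before ranking.
import math

def equivalized_income(household_income: float, household_size: int) -> float:
    return household_income / math.sqrt(household_size)

def income_stratum(value: float, q1: float, q2: float, q3: float) -> str:
    """Assign a quartile-based stratum given three (placeholder) cutoffs."""
    if value < q1:
        return "low"
    if value < q2:
        return "middle-low"
    if value < q3:
        return "middle-high"
    return "high"

# A 4-person household earning 6000 per month counts as 3000 per
# equivalized member, the same as a single person earning 3000.
print(equivalized_income(6000.0, 4))  # 3000.0
```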
## 2.5. Statistical Analysis
The collected data were statistically analyzed using R software version 4.1.1 (R Foundation, Vienna, Austria) according to the guidelines of the 2019–2020 KNHANES for its complex sample design. All values in the sample data are expressed as mean and standard error, and the level of significance was set at a p-value of <0.05. The general characteristics of the study population were compared according to microalbuminuria status using Pearson’s chi-square test for categorical variables and the t-test for continuous variables. Logistic regression analysis was performed to assess the risk factors of microalbuminuria in the study population. Using a logistic regression model incorporating the independent variables that were significant in the chi-square test and t-test results, the odds ratio (OR) and 95% confidence interval (CI) were obtained by exponentiating the estimated regression coefficients. In the logistic regression analysis, the model fitness and the significance of each variable were tested to determine the significant variables.
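The step from regression coefficients to ORs can be illustrated as follows. This is a simplified, unweighted Python sketch on simulated data (the study itself used R with complex-sample weighting, which this sketch does not reproduce); exponentiating each coefficient and its confidence bounds yields the OR and its 95% CI.

```python
# Unweighted sketch on simulated data: fit a logistic model, then
# exponentiate coefficients and 95% confidence bounds to obtain ORs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sbp": rng.normal(130, 15, 500),   # illustrative column names only
    "hdl": rng.normal(50, 12, 500),
})
linpred = -4.0 + 0.035 * df["sbp"] - 0.03 * df["hdl"]
df["micro"] = (rng.random(500) < 1 / (1 + np.exp(-linpred))).astype(int)

X = sm.add_constant(df[["sbp", "hdl"]])
res = sm.Logit(df["micro"], X).fit(disp=0)

or_table = pd.DataFrame({
    "OR": np.exp(res.params),             # exponentiated coefficients
    "CI_low": np.exp(res.conf_int()[0]),  # 95% CI bounds, exponentiated
    "CI_high": np.exp(res.conf_int()[1]),
})
print(or_table.round(3))
```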
## 3. Results
Among the 539 patients with T2DM in this study, 17.63% had microalbuminuria, 3.71% had albuminuria, and 78.66% had normoalbuminuria. The patients’ mean ages were 64.68 years in the microalbuminuria group and 63.76 years in the normoalbuminuria group. The incidence of microalbuminuria was highest in individuals aged ≥70 years and in the low-income group. The groups with <10 years and >20 years of diabetes duration had the highest proportions of patients. The microalbuminuria and normoalbuminuria groups did not vary significantly in terms of age, sex, economic status, BMI, WC, or smoking status, although they varied significantly in the duration of diabetes and medication for hypertension (Table 1).
Regarding the health-related characteristics, the mean ACR values were 9.94 mg/g creatinine in the normoalbuminuria group and 96.25 mg/g creatinine in the microalbuminuria group. The between-group variation based on microalbuminuria status was significant for HbA1c, FBS, Hb, TG, HDL-C, LDL-C, serum blood urea nitrogen (BUN), serum creatinine levels, and WC, but not for BMI, TC level, or DBP (Table 2).
When logistic regression was initially performed to identify significant variables, variables closely tied to the determination of microalbuminuria itself were unnecessarily included. To compensate, we excluded diabetes duration, BUN, and creatinine from the set of significant variables, which improved the fit of the model and the significance of each variable.
Regarding the logistic regression analysis, the ORs were 1.036 (95% CI = 1.019–1.053, p < 0.001) for SBP and 0.966 (95% CI = 0.941–0.989, p = 0.007), 1.008 (95% CI = 1.002–1.014, p = 0.015), and 0.855 (95% CI = 0.729–0.998, p = 0.043) for the HDL-C, FBS, and Hb levels, respectively (Table 3). For the independent variables in the final model, the OR and 95% CI were visualized in decreasing order; the results are shown in Figure 1.
## 4. Discussion
Based on the results of this study, the significant risk factors of microalbuminuria in patients with T2DM were the SBP, HDL-C, FBS, and Hb levels, in decreasing order of effect, with the level of significance set at <0.05. These findings are discussed below.
First, for each unit increase in the SBP level, the risk of microalbuminuria increased 1.036-fold. Microalbuminuria is an important risk factor that predicts the occurrence of diabetic nephropathy and CKD [2,3,4] and a risk factor of cardiovascular disease associated with hypertension [5]. According to a previous study, the risk of diabetic nephropathy in patients with hypertension and T2DM could be reduced via strict regulation of blood pressure levels [35]; the incidence of microalbuminuria was higher in patients with both T2DM and hypertension than in patients with T2DM alone. In the Kidney Disease Improving Global Outcomes guideline [1], the recommended target blood pressure levels to suppress the development of nephropathy and reduce cardiovascular-disease-related mortality are <140/90 mmHg in those with a urinary excretion of <30 mg/g creatinine and <130/80 mmHg in those with a urinary excretion of ≥30 mg/g creatinine or a high risk of cardiovascular disease [36]. Thus, for patients with T2DM, blood pressure regulation combined with periodic microalbuminuria tests could be a preventive measure against the development of diabetic nephropathy as well as microalbuminuria.
Second, for each unit increase in the HDL-C level, the risk of microalbuminuria decreased 0.966-fold. This finding agrees with the results of a previous study [22], which reported correlations of microalbuminuria with hypertension, hyperglycemia, a low HDL-C level, and a high TG level, and with the study conducted by Sun et al. [37], which reported a decrease in microalbuminuria associated with a high HDL-C level. A typical patient with T2DM exhibits dyslipidemia with a characteristically low HDL-C level; meanwhile, HDL-C plays a role in the reverse transport of cholesterol, anti-inflammation, and anti-oxidation [38]. The process of reverse cholesterol transport is inhibited by a lack of HDL-C or a related dysfunction, while glomerulosclerosis and tubulointerstitial injury are induced [36]. The reduced anti-oxidation capacity of HDL-C also increases systemic oxidative stress and circulating oxidized LDL levels [39]; as a low HDL-C level decreases glucose absorption in the skeletal muscles and induces the dysfunction of pancreatic β cells, the resulting hyperglycemia and metabolic disorder can damage the glomerular endothelial and tubulointerstitial cells [40]. Through these mechanisms, a low HDL-C level promotes microalbuminuria, hyperglycemia, and diabetic nephropathy [41]. In addition, dyslipidemia is closely associated with an increased risk of diabetes due to changes in dietary habits and lifestyle; these changes have been correlated with being overweight, insufficient physical activity, smoking, hypertension, and cholesterol levels [39]. For patients with T2DM, increasing the HDL-C level could be a preventive measure against diabetic nephropathy and microalbuminuria.
Third, for each unit increase in the FBS level, the risk of microalbuminuria increased 1.008-fold. This finding agrees with the results of previous studies which reported that the risk factors for microalbuminuria were the FBS level, blood pressure, old age, TG level, and duration of diabetes [22,23]. Hyperglycemia is a risk factor for the complications of diabetes and plays a key role in the onset and development of diabetic nephropathy; accordingly, patients with T2DM who required strict regulation of blood sugar levels showed a significant association with microalbuminuria [42]. Another study supported this association by reporting a correlation between increased excretion of microalbumin and increased levels of blood glucose and insulin [28]. In T2DM, the strict regulation of blood glucose in the early stages and before the onset of diabetic nephropathy prevents the development of diabetic nephropathy [1,14]. Other studies reported that the HbA1c, FBS, SBP, and blood lipid levels were risk factors of microalbuminuria [28,29]. In our study, HbA1c was not a significant risk factor for microalbuminuria, but FBS was, suggesting that strict daily monitoring of FBS is required alongside long-term glycemic management based on HbA1c. The current study showed that the microalbuminuria group had higher FBS levels than the normoalbuminuria group (155 vs. 134 mg/dL). This reflects the importance of FBS in daily glycemic management as a risk factor for microalbuminuria and implies that hyperglycemia, or the inadequate regulation of blood glucose levels, could cause diabetic nephropathy with microalbuminuria.
As such, the ultimate goals of T2DM treatment are to prevent potential diabetic complications and maintain a healthy life through the regulation of blood glucose, blood pressure, and blood cholesterol levels [43]. To achieve these goals, management measures such as blood glucose and blood pressure control, treatment of dyslipidemia, lifestyle modification, and regulation of dietary sodium intake are critical. Hence, the early detection of microalbuminuria, together with public health education and management, is necessary to prevent diabetic nephropathy. Thus, a mobile health program for the systematic management of patients with T2DM should be developed; moreover, e-learning education and notifications regarding the probability of developing microalbuminuria and related complications should be provided as part of an integrated healthcare service.
Finally, for each unit increase in the Hb level, the risk of microalbuminuria decreased 0.855-fold. The Hb level was associated with a low HDL-C level and high FBS and SBP levels, which is considered significant because of vascular damage. According to a previous study on diabetes, a lower hemoglobin level was a risk factor for the progression of diabetic nephropathy [44], and the prevalence of anemia was higher in microalbuminuria than in normoalbuminuria [45]. Anemia was a risk factor for albuminuria and kidney damage in patients with T2DM [46]. Furthermore, albuminuria is a risk factor for anemia in CKD [47]; this is mainly due to reduced erythropoietin formation in the kidneys. Renal anemia appears early in the course of CKD and worsens as the disease progresses. Because anemia in diabetes develops slowly, with signs and symptoms that depend on how far the Hb reduction has advanced, anemia associated with CKD is often asymptomatic and is only detected via routine blood tests. Other studies have shown that Hb, albuminuria, and kidney function are also strongly associated with cardiovascular risk [48]. This is important because the delayed diagnosis and treatment of anemia associated with kidney disease may increase the risk of cardiovascular complications. Therefore, when microalbuminuria occurs in patients with T2DM, follow-up with anemia testing may help to prevent diabetes complications.
In addition, we found that the incidence of microalbuminuria was highest in patients aged ≥70 years and was higher in patients with a diabetes duration of <10 years or >20 years than in patients with a duration of 10–20 years. These results suggest that patients with a 10–20 year duration of diabetes took a greater interest in diabetes management because their disease had already progressed. In patients with a <10 year duration of diabetes, periodic testing could be inadequate or diabetic management could have been neglected, suggesting a need for the intensive prevention and management of diabetic complications. In particular, it is worth noting that the incidence of microalbuminuria was highest in patients with T2DM with low income, which coincides with studies on the differences in the incidence of microalbuminuria according to socioeconomic status [49,50]. This finding suggests the need for active public health strategies and support regarding the early detection of microalbuminuria and the prevention of diabetic nephropathy in patients with T2DM with low income.
The strength of this study is that the sample population was selected from the participants of the KNHANES, an extensive national survey, which supports the generalizability of the results. Meanwhile, this study has several limitations. First, the cause–effect relationships across factors involved in microalbuminuria could not be determined, as the study was cross-sectional in nature. Second, the classification of microalbuminuria could have been inaccurate, as the microalbuminuria test was performed only once. Third, the effects of drugs for diabetes, hypertension, and hyperlipidemia could not be taken into account. Lastly, transient false-positive results may have been obtained due to intense physical exercise, fever, or urinary tract infection.
## 5. Conclusions
The risk factors of microalbuminuria in patients with T2DM were the SBP, HDL-C, FBS, and Hb levels, with the most notable finding being the identification of the Hb level as a risk factor. Nonetheless, verification through subsequent studies on the association between microalbuminuria and anemia is required. In addition, the need for public health strategies that consider age and income level in community-dwelling patients with T2DM was confirmed.
Based on the results of this study, the authors would like to make the following suggestions. First, efforts should be made to reduce the inequalities in healthcare for patients with T2DM and low socioeconomic status and the use of healthcare services for the aged population. Second, further studies should develop a digital health program to reduce the risk factors of microalbuminuria in patients with T2DM. |
# Importance of Communication Skills Training and Meaning Centered Psychotherapy Concepts among Patients and Caregivers Coping with Advanced Cancer
## Abstract
Latinos are more likely to be diagnosed with advanced cancer and have specific existential and communication needs. Concepts within Meaning-Centered Psychotherapy (MCP) interventions and Communication Skills Training (CST) assist patients in attending to these needs. However, Latino-tailored MCP interventions have yet to be adapted for advanced cancer patients and caregivers. A cross-sectional survey was administered to Latino advanced cancer patients and caregivers in which participants rated the importance of the goals and concepts of MCP and CST. Fifty-seven (n = 57) Latino advanced cancer patients and fifty-seven (n = 57) caregivers completed the survey. Most participants rated MCP concepts as extremely important, with ratings ranging from 73.75% to 95.5%. Additionally, 86.8% favored finding meaning in their life after a cancer diagnosis. Participants (80.7%) also selected the concept of finding and maintaining hope to cope with their cancer diagnosis. Finally, participants found CST concepts and skills acceptable, with ratings ranging from 81.6% to 91.2%. The results indicate the acceptability of Meaning-Centered Psychotherapy and Communication Skills Training among Latino patients and caregivers coping with advanced cancer. These results will inform the topics to be discussed in a culturally adapted psychosocial intervention for advanced cancer patients and their informal caregivers.
## 1. Introduction
Foreign-born Latinos, from countries such as Cuba, Puerto Rico, Mexico, and Central and South America, are more likely to be diagnosed with cancer at an advanced stage when compared to non-Latino whites [1,2,3]. In addition, foreign-born Latinos have specific existential [4] and communication needs [5,6]. An advanced cancer diagnosis can cause physical, emotional, psychosocial, and existential stress not only for the patient but also for the caregiver [7,8,9,10]. Cancer as a significant stressor has been addressed with several psychotherapeutic interventions designed to target this existential suffering and communication need. Specifically, Meaning-Centered Psychotherapy (MCP) has shown an effect by targeting the specific psycho-spiritual needs of patients with advanced cancer and enhancing a sense of meaning, peace, and purpose as they face an advanced diagnosis [11,12,13], while Communication Skills Training (CST) targets communication skills among patients and caregivers coping with cancer [14].
William Breitbart developed Meaning-Centered Psychotherapy (MCP) as an intervention to address the existential distress often experienced by patients with advanced cancer [13]. What differentiates MCP from other types of psychotherapy is its direct approach to identifying sources of meaning in the patient’s life through a set list of strategies grounded in the work of Viktor Frankl [13]. A clinical trial comparing Individual Meaning-Centered Psychotherapy (IMCP), Supportive Psychotherapy (SP), and Enhanced Usual Care (EUC), or standard care, showed IMCP had significant treatment effects compared to EUC and some modest differences when compared to SP [13]. Patients with advanced cancer are not the only ones who could benefit from MCP. Informal caregivers are also at risk of suffering distress from anxiety, depression, and existential concerns, including “guilt, issues with role changes, sense of identity, and responsibility to the self [15,16]”. A caregiver-focused MCP intervention addresses existential burdens [15,16] and has been found to be feasible and acceptable [17].
Communication between patients and their caregivers is crucial after a diagnosis of advanced cancer, as themes regarding the patient’s values and end-of-life care may surface [18], leading to issues involving death [19], a subject that is still taboo in Hispanic/Latino society [20] and distressing for patients. Given that Hispanics/Latinos are a heterogeneous culture, there are many reasons why death is a taboo subject: fear of expediting the process [21], denial [18], religious matters [21], and sociocultural factors [22]. Skillfully navigating this initial conversation requires considerable communication skill on the part of the provider [23,24]. Some recent studies have focused on communication coaching for patients before appointments with providers [25,26,27]. However, a lesser-studied element is the communication between the patient and their family caregiver. Patient–family communication is an integral part of adapting to the new diagnosis, as family members may take on new roles as informal caregivers and patients adjust to a newly uncertain future. Because dysfunctional communication can be a source of distress for both members of this unique dyad, recent studies have focused on communication skills between patient and caregiver [28,29], and interventions that address communication are being developed [14].
A meta-analytic review has shown that culturally adapted treatments tailored for a specific cultural group are four times more effective than interventions provided to participants from a variety of cultural backgrounds, and that interventions conducted in Latino participants’ native language are twice as effective as those conducted in English [30]. Though several psychotherapeutic interventions have been designed for advanced cancer patients [11,12,13,31,32,33,34,35,36], only one has been adapted for Latino patients, and no intervention has yet been explicitly adapted for Latino patients and caregivers together.
The literature underscores the importance and impact of MCP and communication for advanced cancer patients and their caregivers, and it highlights the need for culturally adapted interventions. The team used a quantitative approach with patients and caregivers coping with advanced cancer to identify which concepts of Meaning-Centered Psychotherapy and Communication Skills Training were accepted. This paper aims to evaluate the importance of MCP and communication concepts among Latino patients and caregivers coping with cancer. The results of this study will be used to inform the topics to be discussed in the psychosocial intervention for advanced cancer patients and their informal caregivers.
## 2. Materials and Methods
Meaning-Centered Psychotherapy is grounded in the work of Dr. Breitbart and aims to target the specific psycho-spiritual needs of patients with advanced cancer [11]. Its primary goal is to help patients enhance a sense of meaning, peace, and purpose as they approach the end of life. The intervention focuses on meaningful concepts such as: maintaining hope, making sense of the cancer experience, having a purpose in life, reflecting on their heritage, having a purpose in life after a cancer diagnosis, changing their attitude, and being responsible for themselves and others after the cancer diagnosis. Moreover, the intervention addresses experiential sources of meaning, such as love, humor, and beauty.
Using the Communication Skills Training approach [14], the team hypothesizes that the MCP intervention will be enhanced by including explicitly taught communication and coping skills. The coping skills training approach was adapted for patients’ non-spousal caregivers by eliminating spousal terms (e.g., taking care of your partner–spouse) and using general caregiving terms (e.g., taking care of your significant other). The concepts related to CST involve learning how to share thoughts about cancer, expressing feelings regarding a cancer diagnosis, learning strategies to accept others’ perspectives, and acquiring communication strategies to accept and validate others [14].
Participants for this study comprised patients and caregivers who were recruited as dyads from an oncology clinic in the southern area of Puerto Rico between October 2020 and September 2021. The Ponce Research Institute Institutional Review Board (IRB) and Ethical Committee approved all the study procedures. An IRB-approved introductory letter was used to familiarize potential participants with the study. Patients’ inclusion criteria were: [1] a solid stage III or IV tumor, [2] age 21 or older, and [3] self-reported Latino. Eligible family caregivers were those who were: [1] a caregiver of a family member diagnosed with a solid stage III or IV tumor, referred by the advanced cancer patient, [2] age 21 or older, and [3] self-reported Latino. Patients’ exclusion criteria were: [1] diagnosis of a major disabling medical or psychiatric condition, [2] inability to understand the consent procedure, or [3] being too ill to participate, as reported by the patient and determined by the PI’s judgment. After completing the screening process, those eligible and interested provided informed consent and were scheduled to complete the questionnaire. Following informed consent, patients and family caregivers (FCs) were assigned a subject number and administered the survey and self-report assessment to evaluate the patients’ and FCs’ perspectives and psychosocial needs.
The cross-sectional survey in Spanish included rating the importance of the goals and concepts of MCP and CST. In addition, the survey included general demographic questions (age, education, and gender) and a series of standardized scales described in the protocol paper [37]. Participants were given USD 15 as compensation for their time and effort.
All analyses were conducted using IBM SPSS Statistics 21. The database was checked for coding errors and missing data using descriptive statistics. The analyses included descriptive statistics and frequency analysis for the survey ratings of the importance of the goals and concepts of MCP and CST. The study was adequately powered for this formative work. The G*Power statistical program [38] was used to determine the sample size; the study had a power of 0.80 (p < 0.05) to detect a medium-sized effect (Cohen’s d = 0.50) [38]. Based on this analysis, the team recruited a sample of 114 participants (57 advanced cancer patients; 57 family caregivers).
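For illustration, a comparable sample-size computation can be reproduced in Python; this sketch assumes an independent two-sample t-test, since the exact test family specified in G*Power is not stated here, so its output need not match the recruited sample exactly.

```python
# Illustrative power analysis (assumed design: independent two-sample
# t-test, Cohen's d = 0.50, alpha = 0.05, power = 0.80).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.50, alpha=0.05, power=0.80)
print(round(n_per_group))  # ~64 per group under these assumptions
```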
## 3. Results
Fifty-seven (n = 57) Latino cancer patients with stage III (38.5%) and stage IV (61.4%) cancer participated. Most patients (57.9%) and caregivers (71.9%) were married. On average, patients were 63 and caregivers were 56 years old. Most patients were male (57.9%), and most caregivers were female (67.9%). The predominant cancer diagnoses were cervical (17.5%), breast (14.0%), and prostate (14.0%). The sociodemographic and diagnostic characteristics are included in Table 1.
## 3.1. Meaning Centered Psychotherapy Concepts: Dyads
When asked about MCP concepts, the majority of participants rated the concepts as extremely important, with ratings ranging from 73.75% to 95.5%. Participants (95.5%) ranked the concept of their “love for loved ones” as extremely important when coping with a cancer diagnosis. Participants (91.2%) valued the concept of “maintaining hope” as extremely important. Many participants (89.5%) ranked the concept of “being responsible for themselves after a cancer diagnosis” as extremely important. Most participants (89.5%) rated the concept of their “love for life” as extremely important when coping with a cancer diagnosis. Participants (87.6%) ranked the concept of “finding beauty in music, nature, and other life experiences” as extremely important. Regarding the concept of the “care of others after a cancer diagnosis”, 86% of participants rated it as extremely important. Most participants (81.6%) ranked the concept of “understanding their life’s purpose after a cancer diagnosis” as extremely important. Additional concepts were ranked as extremely important: “reflecting or thinking about their changes after a cancer diagnosis” (79.8%), “changing or adjusting their attitude when circumstances are out of their control” (77.2%), “creating meaning in life or thinking about their purpose” (78.1%), “reflecting on heritage and thinking about their life’s contributions” (75.4%), and “making sense of the cancer experience” (73.7%) (see Table 2 for more details). Most participants (86.8%) chose the concept of “finding meaning in their life after a cancer diagnosis”. Moreover, 80.7% of participants selected the concept of “finding and maintaining hope to cope with their cancer diagnosis” (see Table 3).
## 3.2. Meaning Centered Psychotherapy Concepts: Patients
When asked about MCP concepts, most patients rated the concepts as extremely important, with ratings ranging from 75.4% to 94.7%. Patients (94.7%) ranked the concept of their “love for loved ones” as extremely important when coping with a cancer diagnosis. Patients (93%) assessed the concept of “maintaining hope” as extremely important. Many patients (91.2%) ranked the concept of “preserving a sense of humor has helped them cope with their cancer diagnosis” as extremely important. Most patients (89.5%) rated the concepts of their “love for life”, “being responsible for themselves after a cancer diagnosis”, and “finding beauty in music, nature, and other life experiences” as extremely important when coping with a cancer diagnosis. Regarding the concept of the “care of others after a cancer diagnosis”, 82.4% of patients rated it as extremely important. Additional concepts were ranked as extremely important: “create meaning in life or think about their purpose in life” (78.9%), “changing or adjusting their attitude when circumstances are out of their control” (78.9%), “understand their life’s purpose after being diagnosed with cancer” (77.2%), “making sense of the cancer experience” (77.2%), “reflect or think about how they have changed after a cancer diagnosis” (75.4%), and “reflect on their heritage or thinking about what they have contributed with their life” (75.4%) (see Table 4 for more details). Most patients (84.2%) chose the concept of “find meaning in life after a cancer diagnosis”. Moreover, 80.7% of patients selected “finding and maintaining hope to cope with their cancer diagnosis” (see Table 5).
## 3.3. Meaning Centered Psychotherapy Concepts: Caregivers
When asked about MCP concepts, the majority of caregivers ranked the concepts as extremely important, with ratings ranging from 70.2% to 96.4%. Caregivers (96.4%) rated the concept of their “love for loved ones” as extremely important when coping with a cancer diagnosis. Caregivers (89.5%) also rated the concepts of “maintaining hope”, “being responsible for themselves after a cancer diagnosis”, “love for life has helped them cope after a cancer diagnosis”, and “taking care of others after a cancer diagnosis” as extremely important. Most caregivers (86%) ranked the concept of “understanding their life’s purpose after a cancer diagnosis” as extremely important. Caregivers (84.2%) ranked “finding beauty in music, nature, and other life experiences” as extremely important. Regarding the concept of “reflect or think about how they have changed after a cancer diagnosis”, 84.2% of caregivers rated it as extremely important. Many caregivers (83.9%) rated “preserving a good sense of humor” as extremely important when coping with a cancer diagnosis. Additional concepts were ranked as extremely important: “creating meaning in life or thinking about their purpose” (77.2%), “changing or adjusting their attitude when circumstances are out of their control” (75.4%), “reflecting on heritage and thinking about their life’s contributions” (75.4%), and “making sense of the cancer experience” (70.2%) (see Table 6 for more details). Most caregivers (89.5%) chose the concept of “finding meaning in their life after a cancer diagnosis”. Moreover, 80.7% of caregivers selected “finding and maintaining hope to cope with their cancer diagnosis” (see Table 7).
## 3.4. Communication Skills Training Concepts: Dyads
When asked about communication skills training concepts and skills, most participants wanted to learn more, with ratings ranging from 81.6% to 91.2%. Most participants (91.2%) indicated that they would like to acquire problem-solving skills, and 90.4% favored the concept of “wanting to learn that they worry about each other”. A large portion (89.5%) favored the concept of “learning ways to show they are accompanying each other in the process”. Participants (86.6%) also favored the concept of “learning to talk about their thoughts regarding cancer”. Eighty-six percent (86%) of participants favored the concept of “acquiring communication strategies to accept and validate others”. Furthermore, 85.8% of participants chose the concept of “wanting to express their feelings about cancer”, and 85.1% selected the concept of “acquiring communication skills to accept other people’s feelings”. Many participants (82.5%) favored the concept of “learning communication strategies to accept other people’s perspectives”. Finally, 81.6% of participants selected the concept of “reviewing their life and considering their heritage” (see Table 8 for more details).
## 3.5. Communication Skills Training Concepts: Patients
When asked about communication skills training concepts and skills, most patients wanted to learn more, with ratings ranging from 80.7% to 91.2%. Most patients (91.2%) indicated that they would like to acquire problem-solving skills. A large portion (89.5%) favored the concepts of “wanting to learn that they worry about each other”, “learning ways to show they are accompanying each other in the process”, “learning to talk about their thoughts regarding cancer”, and “wanting to express their feelings about cancer”. Patients (87.7%) also favored the concepts of “acquiring communication strategies to accept and validate others” and “acquiring communication skills to accept other people’s feelings”. Furthermore, 84.2% of patients chose the concept of “learning communication strategies to accept other people’s perspectives”. Finally, 80.7% of patients selected the concept of “reviewing life and considering their heritage” (see Table 9 for more details).
## 3.6. Communication Skills Training Concepts: Caregivers
When asked about communication skills training concepts and skills, most caregivers wanted to learn more, with ratings ranging from 80.7% to 91.2%. Most caregivers (91.2%) indicated that they would like to acquire problem-solving skills and favored the concept of “wanting to learn that they worry about each other”. A large portion (89.5%) favored the concept of “learning ways to show they are accompanying each other in the process”. Caregivers (87.7%) also favored “learning to talk about their thoughts regarding cancer”. Additionally, caregivers (84.2%) selected “acquiring communication strategies to accept and validate others”. Moreover, 82.5% of caregivers chose the concepts of “acquiring communication skills to accept other people’s feelings” and “reviewing their life and considering their heritage”. Many (82.1%) favored the strategy of “wanting to express their feelings about cancer”. Finally, 80.7% of caregivers selected the concept of “learning communication strategies to accept other people’s perspectives” (see Table 10 for more details).
## 4. Discussion
When patient and caregiver dyads were asked about the concepts of MCP and CST, the majority of participants rated all of the concepts favorably. The acceptance of MCP concepts ranged from 73.75% to 95.5%, while that of CST concepts ranged from 81.6% to 91.2%. Comparable results were seen in the adaptation of MCP for a Latino population, where patients expressed a need to integrate communication skills and accepted MCP concepts in the process of adapting to their cancer diagnosis [37,39]. Some of the many MCP concepts included finding meaning in family and loved ones, maintaining hope, taking responsibility to care for oneself, finding meaning in life after a diagnosis, maintaining a love for life, and preserving a sense of humor. Moreover, the literature acknowledges the efficacy of interventions designed to improve dyadic communication among cancer patients and caregivers [40]. However, studies with Latino patient–caregiver dyads are lacking. The CST concepts included having problem-solving skills, worrying about each other, demonstrating companionship through the journey, learning to talk about a cancer diagnosis, and acquiring communication strategies.
Results indicate that participants favored love for their loved ones to cope with their diagnosis, which is consistent with studies that underscore how many patients lean on family for support as a coping mechanism during a cancer diagnosis [41]. Additionally, family is an important value to Latinos [42], which could explain why many participants consider it important to take care of others after a cancer diagnosis. Latino patients have reported the desire for assistance in finding hope and meaning in life [43]. Given that participants were also caregivers, many regarded maintaining hope as essential. These results are congruent with literature where caregivers used hope and prayer while caring for a family member with cancer [20]. However, while patients may use hope as a coping mechanism, it can become a difficult topic when discussing end-of-life [19]. The current literature regarding Latino cancer patients and meaning highlights the use of positive reframing and meaning to cope with a cancer diagnosis [41]. Additionally, in the same study, some of the participants integrated the value of life with purpose into their experience with cancer. These results are congruent with the participants’ selection of the concepts of finding meaning, creating meaning, and finding purpose in their life. MCP attempts to assist participants in the search for meaning and purpose through experiential sources of meaning. Moreover, participants favor the discussion of “making sense”, which is seen in advanced cancer patients as an attempt to make sense of and understand the terminality of an advanced cancer diagnosis experience [44].
Many of the participants indicated that reflecting on the changes in one’s life after receiving a cancer diagnosis was important. Even though the MCP concept of change after a diagnosis focuses on general life, hope, and experiences, Latino participants might also reflect on changes attributed to physical changes [45,46], sexuality [46], work [47], or overall quality of life [48]. Regarding responsibility for oneself after a cancer diagnosis, many participants considered this a necessary part of the cancer trajectory. These results are echoed in the literature, where Latino cancer patients take responsibility for their part in the cancer trajectory [49]. Concerning humor as a coping mechanism, a study with Hispanic male cancer survivors showed how the survivors used humor as one of many coping mechanisms during their diagnosis and treatment process [50]. Even though our sample included men, women, patients, and caregivers, most selected humor to cope with their diagnosis.
Some Latinos would rather not discuss the end-of-life stage [18] or death [19]; however, the dyads within this study ranked various communication skills as essential. These results could be attributed to the integration of cultural factors and values when adapting interventions [51]. The integration of cultural values within interventions has been shown to be successful [52]. For instance, communication interventions aimed at Latinos and their caregivers coping with a chronic illness (diabetes) have yielded positive results: dyads with good relationships had better care routines, considered the program successful in managing the disease at home, and had better social support [53]. Couples’ CST results underscore the benefits of communication between the advanced cancer patient and partner. Some benefits include the desire not to be seen as a “patient” and “caregiver”, symptom management, support for a partner, decision making, conflict resolution, and preparation for death [14]. These results are reflected in the team’s sample, with participants favoring problem-solving skills and companionship throughout the process. Delivering tailored communication interventions has proven acceptable and beneficial for the patient and caregiver [54]. Thus, it is imperative that caregiver–patient dyads be provided with the necessary skills to discuss thoughts about cancer, express their feelings about cancer, and acquire the communication skills they might need in their daily lives.
## 5. Conclusions
The results of this study show that advanced cancer patients and caregivers favor Meaning-Centered Psychotherapy and Communication Skills Training concepts. Existing literature likewise shows that patients favor these concepts both independently and when integrated into an MCP intervention or a communication-based intervention. These results highlight the importance of integrating both patient and caregiver perspectives into the development and application of a culturally adapted psychosocial intervention.
## 6. Limitations
The instrument used in the study, which contained Meaning-Centered Psychotherapy concepts and Communication Skills Training items, was developed as a questionnaire; therefore, the analyses were limited to a descriptive approach, and inferential analyses could not be performed. If a scale were devised, future analyses could explore statistical differences between patients and caregivers, as well as by sex, income, and clinical variables.
# Association between Smoking and Periodontal Disease in South Korean Adults
## Abstract
Smoking poses a threat to global public health. This study analyzed data from the 2016–2018 Korea National Health and Nutrition Examination Survey to investigate smoking’s impact on periodontal health and identify potential risk factors associated with poor periodontal health in Korean adults. The final study population comprised 9178 participants: 4161 men and 5017 women. The dependent variable was the Community Periodontal Index (CPI), used to assess periodontal disease risk. Smoking, the independent variable, was divided into three groups. The chi-squared test and multivariable logistic regression analyses were used in this study. Current smokers had a higher risk of periodontal disease than non-smokers (males OR: 1.78, 95% CI = 1.43–2.23; females OR: 1.44, 95% CI = 1.04–1.99). Age, educational level, and dental checkups affected periodontal disease. Men with a higher number of pack years had a higher risk of periodontal disease than non-smokers (OR: 1.84, 95% CI = 1.38–2.47). Those who had quit smoking for less than five years had a higher risk of periodontal disease than non-smokers but a lower risk than current smokers (males OR: 1.42, 95% CI = 1.04–1.96; females OR: 1.11, 95% CI = 0.71–1.74). It is necessary to motivate smokers by educating them on the importance of early smoking cessation.
## 1. Introduction
Smoking is one of the biggest threats to public health [1]. According to the World Health Organization, tobacco kills more than 8 million people each year, including approximately 1.2 million deaths from exposure to secondhand smoke [2]. Moreover, since tobacco smoke contains more than 7000 chemicals, many of them toxic [3], smoking is associated with numerous preventable chronic diseases [4]. In Korea, the smoking rate has been decreasing; however, as of 2018, the prevalence of daily smoking among men in Korea reached 30.5%, the third-highest rate among the Organization for Economic Co-operation and Development (OECD) members [5]. The authorities have made intensive efforts to eliminate tobacco use by implementing strong and effective tobacco control policies and measures, such as cigarette tax hikes and media campaigns [4,6].
The association between smoking and various diseases, including major causes of death, has been well-established. A cohort study in the US reported that smokers had a higher risk of developing bladder cancer and pancreatic cancer than non-smokers [7]. Another study found that smokers were more likely to have elevated levels of blood insulin and triglycerides compared to non-smokers [8,9].
Smoking can negatively impact the oral cavity, particularly in non-inflammatory oral diseases [10]. Harmful substances in tobacco products, such as nicotine, can harm the gingival tissue, decrease blood flow to the gums, and compromise the immune system [11]. Tobacco use can increase susceptibility to oral infections, stain teeth, cause dryness in the mouth, and delay the healing of oral wounds [12].
Periodontal diseases are considered to be chronic destructive inflammatory diseases [13]. They are characterized by the destruction of the periodontal tissue, loss of adhesion to connective tissues, loss of alveolar bone, and the formation of pathological pockets around the teeth [14,15,16]. In addition, poor periodontal health is associated with systemic diseases, such as cancer, heart disease, and diabetes; therefore, management is important [17,18,19]. Previous studies have shown that smoking is associated with poor periodontal health, even among young adults [20]. Another study in Korea revealed that quitting smoking within a decade could potentially improve periodontal health for former smokers [21]. A study in the US, which used large-scale data, concluded that smoking is a significant risk factor for periodontitis and may account for more than 50% of periodontitis cases in adults [22].
While previous studies have examined the association between smoking and periodontal diseases, additional evidence is needed to encourage healthy habits that promote smoking cessation. This study aimed to investigate the relationship between smoking and the risk of periodontal diseases in Korean adults, using a nationwide cross-sectional survey with a large sample size. Furthermore, this study aimed to provide more robust evidence for the importance of early smoking cessation by analyzing the relationship between the duration of smoking cessation, in five-year intervals, and periodontal disease, which is more detailed than in previous studies.
## 2.1. Data
The data for this study were obtained from the 2016–2018 Korea National Health and Nutrition Examination Survey (KNHANES), using a separate raw dataset (HNYN_OE). The KNHANES has been conducted by the Korea Disease Control and Prevention Agency (KDCA) since 1998 to produce national statistics through a survey of the health level, health-related behavior, and nutritional status of 10,000 Koreans annually. The KDCA Research Ethics Review Board approved the data collection protocols for the KNHANES. The data are available for download from the KDCA website (https://knhanes.kdca.go.kr/knhanes/sub03/sub03_02_05.do, accessed on 1 January 2023). Thus, this study did not need extra approval from the ethics review board. The KNHANES is a self-reported survey using a stratified, two-stage, clustered sampling design conducted annually for South Koreans of all ages, divided into three age groups (children: 1–11 years old; adolescents: 12–18 years old; and adults: 19 years or older).
## 2.2. Study Population
The total number of participants who completed the health examination survey for KNHANES 2016–2018 was 16,489 (7485 males and 9004 females). The exclusion criteria consisted of three categories: (a) under 19 years of age ($$n = 3299$$), (b) unable to perform oral examination due to tooth loss ($$n = 2581$$), and (c) missing values in health assessment or survey ($$n = 1440$$). The final study population was 9178, with 4161 men and 5017 women (Figure 1).
## 2.3. Variables
The dependent variable in this study was the Community Periodontal Index (CPI), used to measure the risk of periodontal disease. The oral health examinations were conducted by public health dentists and local public health dentists at the city and provincial levels under the supervision of the Korea Disease Control and Prevention Agency (KDCA). The risk to periodontal health was assessed by dividing the upper and lower jaws into three sections each and recording the highest CPI score for each section. The CPI score was based on measurements of periodontal pocket depth, calculus attachment, and gingival bleeding. The scores ranged from 0 to 4, with 0 indicating healthy, 1 indicating bleeding, 2 indicating dental calculus, 3 indicating a superficial periodontal pocket of 4–5 mm, and 4 indicating a deep periodontal pocket of 6 mm or more. Using the sum of the CPI scores, we assessed the risk of periodontal disease as the outcome variable.
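As an illustration of how sextant-level CPI scores can be turned into a binary outcome, the sketch below flags periodontal disease when any sextant reaches a score of 3 (a periodontal pocket of 4 mm or more); this is a common operationalization, not necessarily the exact scoring rule used in this study.

```python
# Minimal sketch: one highest CPI score (0-4) per sextant; flag
# periodontal disease when any sextant scores 3 or higher.

def has_periodontal_disease(sextant_scores: list[int]) -> bool:
    if len(sextant_scores) != 6 or any(not 0 <= s <= 4 for s in sextant_scores):
        raise ValueError("expected six CPI scores between 0 and 4")
    return max(sextant_scores) >= 3

print(has_periodontal_disease([0, 1, 2, 3, 0, 1]))  # True: one pocketed sextant
```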
The independent variable was smoking status, classified into three groups: non-smokers, ex-smokers, and current smokers. Smoking status was based on the question, “Do you currently smoke cigarettes?”. We also used pack years and smoking cessation status as variables in the subgroup analysis. Pack years quantify a person’s cumulative lifetime smoking exposure and are calculated by multiplying the number of packs of cigarettes (20 cigarettes per pack) smoked per day by the number of years the person has smoked. The pack-year calculation is shown below as a one-line helper.
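The sketch uses hypothetical names and takes one pack as 20 cigarettes, per the standard definition.

```python
# Pack years = (cigarettes per day / 20 cigarettes per pack) * years smoked.

def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    return (cigarettes_per_day / 20.0) * years_smoked

# 15 cigarettes/day for 30 years -> 22.5 pack years, i.e., the
# "over 20 pack years" stratum used in the subgroup analysis.
print(pack_years(15, 30))  # 22.5
```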
Covariates were controlled for as potential confounding factors. These included socioeconomic factors, such as sex, age, household income, and region, and factors related to health behaviors, such as current drinking status and physical activity. Oral health habits were also included as covariates. Tooth brushing frequency was assessed based on the number of times teeth were brushed during the previous day, while dental checkup status was surveyed based on the question, “Did you have a dental checkup in the past 12 months?”.
## 2.4. Statistical Analysis
A chi-squared test was conducted to explore the general characteristics of the study population, which are presented as frequencies and percentages. To assess the relationship between smoking and periodontal disease using the sum of the CPI scores in adults, we used multivariable logistic regression analysis with covariate adjustment. Subgroup analyses were performed to evaluate the relationships between pack years, smoking cessation status, and periodontal disease. All the results are presented as odds ratios (ORs) and 95% confidence intervals (CIs). The analyses were performed using stratified sampling variables, and all estimates were computed using weighted variables so that the results generalize to the population. SAS version 9.4 software (SAS Institute, Cary, NC, USA) was used for all the statistical analyses. Statistical significance was determined as a two-sided p-value of <0.05.
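As a rough Python analogue of the weighted analysis described above (the study used SAS survey procedures, which additionally handle strata and clusters; the variable names and data here are simulated), survey weights can be passed to a logistic GLM.

```python
# Hedged sketch: weighted logistic regression on simulated data, using
# statsmodels GLM freq_weights as a crude stand-in for survey weighting
# (freq_weights is intended for integer frequencies, so this is only an
# approximation of a design-based analysis).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "current_smoker": rng.integers(0, 2, 800),
    "age": rng.integers(19, 80, 800),
    "svy_weight": rng.uniform(0.5, 2.0, 800),  # illustrative survey weights
})
p = 1 / (1 + np.exp(-(-2.0 + 0.6 * df["current_smoker"] + 0.02 * df["age"])))
df["cpi_disease"] = (rng.random(800) < p).astype(int)

X = sm.add_constant(df[["current_smoker", "age"]])
fit = sm.GLM(df["cpi_disease"], X,
             family=sm.families.Binomial(),
             freq_weights=df["svy_weight"]).fit()
print(np.exp(fit.params).round(2))  # weighted odds ratios
```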
## 3. Results
Table 1 summarizes the characteristics of the study population, classified according to sex. Of the 9178 participants, 4161 were male (45.3%) and 5017 were female (54.7%). A total of 3042 (73.1%) males and 3143 (62.6%) females had periodontal disease risks, as expressed by the CPI. Among the males, 1484 (35.7%) were current smokers, 1623 (39.0%) were ex-smokers, and 1054 (25.3%) were non-smokers. Among the females, 282 (5.6%) were current smokers, 344 (6.9%) were ex-smokers, and 4391 (87.5%) were non-smokers.
Table 2 presents the multivariable logistic regression results exploring the association between smoking and periodontal disease while adjusting for covariates. Smokers had a higher risk of periodontal disease than non-smokers: the association was not statistically significant for ex-smokers, but it was significant for current smokers among both males (OR: 1.78, 95% CI = 1.43–2.23) and females (OR: 1.44, 95% CI = 1.04–1.99). As age increased, the participants showed an elevated risk of periodontal disease. The participants with a middle school education or lower had a higher risk of periodontal disease than those with a college education (males OR: 1.63, 95% CI = 1.15–2.23; females OR: 1.59, 95% CI = 1.18–2.14). The individuals who did not receive dental checkups were more likely to have periodontal disease (males OR: 1.62, 95% CI = 1.36–1.93; females OR: 1.60, 95% CI = 1.39–1.84).
Table 3 presents the results of the subgroup analysis for the independent variables stratified by smoking behavior. Most of the ex-smoker results were not significant, and the observed associations were stronger in the males than in the females. The risk of periodontal disease generally increased with age among male current smokers, although it was not statistically significant for men in their 50s. The current smokers had a higher risk of periodontal disease at all education levels, and the risk was highest for those with middle school education or lower (OR: 3.15, 95% CI = 1.37–7.21). The current smokers had an elevated risk of periodontal disease regardless of their physical activity status (males, adequate: OR = 1.83, 95% CI = 1.35–2.50; inadequate: OR = 1.77, 95% CI = 1.30–2.42). Similarly, regardless of whether they received regular dental checkups, the current smokers had a higher risk of periodontal disease (males, checkups: OR = 1.90, 95% CI = 1.40–2.59; no checkups: OR = 1.73, 95% CI = 1.29–2.33).
The results of the subgroup analysis stratified by pack years and smoking cessation are presented in Table 4. The males showed a statistically significant positive association: those with a higher number of pack years had a higher risk of periodontal disease than the non-smokers (over 20 pack years, OR: 1.84, 95% CI = 1.38–2.47). Those who had quit smoking for less than five years had a higher risk of periodontal disease than the non-smokers but a lower risk than the current smokers (males OR: 1.42, 95% CI = 1.04–1.96; females OR: 1.11, 95% CI = 0.71–1.74).
## 4. Discussion
Despite the reduction in smoking prevalence over the past 30 years, the total number of smokers worldwide has increased from 0.99 billion in 1990 to 1.14 billion in 2019, due to population growth [23]. The American Academy of Periodontology has pointed out that smoking negatively impacts the healing and treatment of periodontitis [24]. The purpose of this study was twofold. First, we used a nationwide survey with a large sample size to investigate the association between smoking and periodontal disease. Second, we attempted to support the importance of early smoking cessation by analyzing the relationship between smoking cessation and periodontal disease in five-year intervals, compared with the ten-year intervals used in previous studies.
The mechanisms underlying the association between smoking and periodontal disease are as follows. Smoking stimulates the establishment of pathogenic microflora, diminishes the host immune response, and elevates the release of inflammatory mediators [14,15,16,25]. As smokers are more susceptible to colonization by pathogenic microorganisms than non-smokers, previous studies have reported an increase in particular pathogens in smokers, such as Actinobacillus actinomycetemcomitans and Bacteroides forsythus, although the reported pathogen levels varied with the methods used in the studies [14,26,27]. Smoking can also affect host inflammatory and immune responses, for example through the immunosuppressive effects of nicotine on macrophage and cell-mediated immune responses, the inhibition of human periodontal ligament fibroblast migration, and the repression of alkaline phosphatase production [28,29]. Consistent with these mechanisms, Table 2 shows that current smokers had a higher risk of periodontal disease than non-smokers. This supports previous findings that smoking is a risk factor for oral health, even among young smokers [20]. Additionally, the results in Table 2 are in line with a previous study using a large US sample, which showed that smoking significantly influences periodontitis [22].
Notable points in Table 2 are the results for the age, education level, and dental checkup variables. Regarding age, our results are consistent with previous studies showing that the prevalence of periodontal disease tends to increase with age: studies in Brazil and India have reported that increasing age affects the severity and prevalence of periodontal disease, regardless of gender [30,31]. Education level also affected periodontal health: participants with middle school education or lower had a higher risk of periodontal disease than those with college education or higher. Similar findings have been reported in other studies [22,32], in which higher education progressively decreased the risk of periodontal disease, implying that education regarding periodontal health is important. Finally, there was a higher risk of periodontal disease in people who did not undergo oral examinations, which supports previous findings that those who regularly underwent oral examinations had a lower risk of periodontal disease than those who did not [33,34].
In Table 3, the association was not statistically significant for male smokers in their 50s. This counterintuitive finding may be explained by aging, which affects tooth loss [4,35]. Even with good physical activity habits and regular dental checkups, current smokers had a higher risk of periodontal disease than non-smokers. This result suggests that current smokers cannot avoid the risk of periodontal disease, even if they maintain good health habits.
As shown in Table 4, men with high pack years had a higher risk of periodontal disease, whereas the results for women were statistically insignificant. These results can be explained in the context of the WHO Framework Convention on Tobacco Control (WHO FCTC). The WHO FCTC emphasized the need to consider gender when developing tobacco control strategies, as perceptions of smoking habits related to gender continue to differ across social contexts and cultural norms. Specifically, in Confucian Asian countries, views on female smoking still tend to be more conservative than those on male smoking [36]. The smoking rate among women is also increasing in Korea. However, given this social context, data on the female smoking rate collected by voluntary reporting may be inaccurate, as some female smokers may underreport their smoking.
We found that ex-smokers with relatively short smoking cessation periods had a lower risk of periodontal disease than current smokers. This result can be compared to a previous study that reported the possibility of reversing the risk of periodontal disease after quitting smoking for ten years [15]. That smoking is harmful is a clear and simple message; the results of this study go further by supporting the importance of early smoking cessation. They could motivate smokers to quit by showing that people who quit for a relatively short period, fewer than five years, already have a lower risk of periodontal disease.
This study had several limitations. First, because of the cross-sectional design, reverse causation could not be ruled out. Second, the KNHANES data were collected through a self-reported survey; the data on smoking behavior, health habits, and socioeconomic variables may therefore be inaccurate, and recall bias was possible. Third, it was impossible to identify the type of smoking, such as whether participants used conventional cigarettes, e-cigarettes, or both. In addition, we could not use biological indicators, such as urine cotinine, in the subjects. Further studies are needed that address these limitations.
Despite these limitations, our study has several strengths. The main strength is the use of nationally representative, large, and high-quality data. KNHANES was conducted using a random cluster design, which allows the results to be generalized to the general population. Second, oral health examination data collected by public health doctors allow periodontal disease risks to be estimated effectively: using the CPI score from a doctor's examination permits a more precise estimate of periodontal disease risk than a participant's subjective, self-reported oral symptoms. Third, our study supported the importance of early smoking cessation, providing more proactive evidence than previous studies.
## 5. Conclusions
This study demonstrated a strong association between smoking and periodontal disease in South Korean adults. Long-term smoking was closely related to poor periodontal health. The findings that even a relatively short period of smoking cessation, less than five years, had a positive impact on periodontal disease could be a powerful motivator for smokers. There is a need for effective tobacco control measures to reduce the prevalence of periodontitis. Tailored smoking cessation policies and educational interventions, highlighting the benefits of short-term smoking abstinence in reducing periodontal disease risk, could encourage current smokers to quit and ultimately improve public oral health.
# Relationship between Behavioural Intention for Using Food Mobile Applications and Obesity and Overweight among Adolescent Girls
## Abstract
Changes in the body mass index (BMI) of children and adolescents have been linked to mobile usage, particularly food applications. This study aimed to investigate the relationship between food application usage and obesity and overweight among adolescent girls. This cross-sectional study was conducted among adolescent girls aged 16–18 years. Data were collected using a self-administered questionnaire from female high schools in five different regional offices across Riyadh City. The questionnaire included questions regarding demographic data (age and academic level), BMI and a behavioural intention (BI) scale comprising three constructs: attitude towards behaviour, subjective norms and perceived behavioural control. Of the 385 included adolescent girls, $36.1\%$ were 17 years old, and $71.4\%$ had normal BMI. The overall mean BI scale score was 65.4 (SD 9.95). No significant differences were observed between overweight or obesity in relation to the overall BI score and its constructs. A high BI score was more strongly associated with participants studying in the east educational office than with those enrolled in the central educational office. Behavioural intention to use food applications was high in this adolescent age group. Further investigations are necessary to determine the influence of food application services among individuals with high BMI.
## 1. Introduction
Childhood obesity is one of the most challenging global health issues of the 21st century [1]. It has been established that body weight during childhood directly impacts an individual's lifelong health [2]. According to the World Obesity Atlas 2022, published by the World Obesity Federation, one billion people will be obese by 2030, including one in five women and one in seven men [3]. Globally in 2016, over 340 million children and adolescents aged up to 19 years were overweight or obese [4]. Adolescents, a group comprising individuals who are 10–19 years of age, are of interest as they undergo various physical, sexual, psychological and social developmental changes [5]. Contrary to the popular belief that individuals in this age group are often healthy, adolescents face several health issues: they are at high risk of remaining overweight or obese into adulthood and of developing noncommunicable diseases (NCDs), such as diabetes and cardiovascular diseases, at a young age [4,5]. The body mass index (BMI) is used to diagnose childhood overweight and obesity. Overweight is defined as having a BMI at or above the 85th percentile and below the 95th percentile for children and adolescents of the same age and sex, whereas obesity is defined as having a BMI at or above the 95th percentile for children and teens of the same age and sex [6]. Increasing levels of overweight and obesity pose risks to the health and well-being of children and adolescents [7], and technology has contributed to this increase [8]. As a type of technology used to deliver food online, food apps may pose serious health risks. Therefore, the prevention of weight gain among children and adolescents requires considering food apps as risk factors, and further research is required to understand these implications.
## 1.1. Literature Review
Obesity is a major problem in the USA, with its prevalence being higher in adolescents than in children of other age groups. According to the Centers for Disease Control and Prevention (CDC), the prevalence of obesity is $13.9\%$ among 2–5-year-olds, $18.4\%$ among 6–11-year-olds and $20.6\%$ among 12–19-year-olds [9]. Overweight and obesity are known to be caused by a variety of factors, including eating patterns, lack of sleep or physical activity, certain medications, and genetic factors [10]. A systematic review revealed an association between friendship networks and obesity-related behaviours among adolescents [11]. A study in Iraq revealed that overweight and obesity were higher among female than among male adolescents [12]. In Saudi Arabia, the overall prevalence figures of overweight and obesity were $13.4\%$ and $18.2\%$, respectively; compared with the WHO-based national prevalence rate of obesity reported in 2004 (≈$9.3\%$), the obesity rate has doubled in 10 years [13]. Also, the estimated prevalence rates of overweight and obesity among school-aged children were $19.6\%$ and $7.9\%$, respectively, and high rates were reported for adolescents ($26.6\%$ and $10.6\%$ for overweight and obesity, respectively) [13]. Many studies have identified overweight and obesity as health issues in Saudi Arabia [14,15,16,17] and observed a direct relationship between obesity and several factors, such as skipping breakfast, excessive calories, consumption patterns, and parental socioeconomic factors [16,17,18,19].
There are many factors that may influence the development of obesity, such as lack of physical activity and the use of digital devices [20]. Digital device use has been associated with many evidence-based benefits, including early learning, exposure to many ideas and knowledge, enhanced social interactions and support and increased access to health promotion information and messages [21]. However, the use of such digital devices can compromise sleep, attention, and learning; increase the incidence of obesity and depression; and expose people to inaccurate, insensitive, or unsafe information [21]. In contrast, a recent study suggests that digital devices can empower parents who are concerned about their children's overweight and obesity [22]. Studies have found an increased incidence of high BMI in children and adolescents who use digital devices [23,24,25,26,27], including for ordering food online, and the use of online food ordering systems is increasing [21]. Currently, more than 1.2 billion people use online food delivery systems (OFDs) worldwide, and by 2027, the total number of people using platform-to-consumer delivery systems is expected to reach 1517.4 million [28]. Within the past four years, food orders placed through direct restaurant apps or third-party services have increased by $23\%$ in the USA [29]. The convenience of food applications can lead to adverse health outcomes: in addition to being easy to use, these apps provide access to menus and offers available at local restaurants, demonstrating how technology significantly impacts lifestyle and wellness. In the U.S., for instance, there is a lack of research on how digital food ordering affects health and wellness from an individual or public health perspective [29], and few studies have investigated whether users of food applications are overweight or obese, particularly among adolescents. Additionally, an early report projected, based on World Health Organization estimates for Saudi Arabia covering 1992 to 2022, that obesity prevalence would rise from $12\%$ in 1992 to $41\%$ by 2022 for men and from $21\%$ to $78\%$ for women [30]. This highlights critical gaps in the literature regarding the relationship between food apps and obesity among adolescents. More research is needed to explain the underlying mechanisms and provide effective prevention strategies [20].
Considering food apps as risk factors in both age groups helps in applying preventive measures and decreasing weight gain among children and adolescents. Further research is needed to fully understand the potential health implications of online food delivery, as current research on the impact of digital use on lifelong health is lacking. Given these high prevalence figures among female adolescents, it is imperative to predict and recognise target-oriented behaviour, take preventive measures, and raise awareness among young women about their health. Therefore, this study aimed to assess the intended behaviour of using food apps by applying the theory of planned behaviour (TPB).
## 1.2. The Conceptual Framework
The theory of planned behaviour is one of the social psychology theories widely used in health promotion activities. TPB can offer a reasonable explanation of the decision-making processes underlying both the intention for and engagement in self-care overweight/obesity-reducing behaviours [31]. The theory assumes that people are rational and that their decisions are based on the knowledge available to them. TPB explains environmental and individual factors that influence behaviour; the most important determinant of a person's behaviour is their intention [31]. Three interrelated concepts are described in TPB, and these concepts can serve as factors that define the level of behavioural intention (BI) [32]. First, attitude towards the behaviour (ATB) refers to the individual's positive or negative evaluation of performing the behaviour [32]. Attitude is perceived as a combination of feelings, beliefs, intentions and perceptions. Second, subjective norm (SN) is the social pressure upon a person to behave in a certain way [32]. Lastly, perceived behavioural control (PBC) relates to the perceived influence of factors that may enhance or hamper the behaviour [32]. Thus, intention sits at the core of an individual's behaviour and is acted upon by the three concepts [32]. Accordingly, positive ATB and SN, together with high PBC, lead to a strong individual intention to perform a positive behaviour [32].
In the context of the current study, using food apps, ATB predicts that participants might believe that using an app is more convenient. SN focuses on the individual's surroundings, such as family, friends, beliefs, habits or social media advertising, which probably influence their decision. PBC means that adolescents perceive using apps as an easy way to fulfil their dietary needs. Because the theory helps predict a positive or negative attitude towards an action, if all three concepts, or at least two of them, are positive, the intention increases; adolescents will therefore be more likely to use food apps. See Figure 1.
Behavioural intention for using food apps may influence BMI. Attitude towards the behaviour, sociocultural factors, and the barriers to and facilitators of adolescent behaviour influence the behavioural intention underlying food app use. Therefore, recognising food app intention behaviour as a factor that leads to overweight and obesity will be useful in developing prevention measures. Thus, this study aimed to investigate the influence of food app intention behaviour on adolescent girls in Saudi Arabia using the theory of planned behaviour (TPB). In addition, the current study hypothesised that attitude towards the behaviour, subjective norm, and perceived behavioural control predict behavioural intention.
## 2.1. Design and Participants
This study adopted a quantitative descriptive design, selected to test the hypothesis of the study. It is a deductive approach in which the concepts of obesity, overweight and food apps are operationalised as variables, and the relationships among them are tested. When an evidence-based conclusion is drawn, generalisations can be extended to a larger population. Given the cross-sectional design of our study, the variables and relationships among them were determined [33]. Data were collected from female high schools in five different educational regions across Riyadh City. This study included female students aged 16–18 years. The required sample size was determined as 383, based on an estimated population size of 86,704 obtained from the Ministry of Education database, last updated in June 2016 [34]. Calculations were made using Epi Info version 7.2.3, with a confidence level of $95\%$, a margin of error of $5\%$, an expected frequency of $50\%$ and a design effect of 1.0 in five clusters [35]. This sample size was increased by approximately $15\%$ [415] to compensate for any absenteeism, dropouts or incomplete questionnaires. Thus, 390 questionnaires were returned, of which 385 remained after excluding five incomplete questionnaires.
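As a check on the figure above, a minimal Python sketch of Cochran's sample-size formula with a finite-population correction (the approach Epi Info uses for descriptive surveys; its internal rounding may differ slightly) reproduces the reported 383:

```python
import math

def sample_size_for_proportion(population: int, p: float = 0.50,
                               margin: float = 0.05, z: float = 1.96,
                               design_effect: float = 1.0) -> int:
    """Cochran's formula for estimating a proportion,
    with a finite-population correction."""
    n0 = design_effect * z**2 * p * (1 - p) / margin**2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                 # correct for finite N
    return math.ceil(n)

print(sample_size_for_proportion(86_704))  # -> 383, as reported above
```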
## 2.2. Data Collection
The study was conducted from January to March 2021. Quantitative data were collected by distributing a self-report questionnaire and performing physiologic measurements of the students' weight and height to evaluate their BMI. To ensure the most representative sample, probability sampling was conducted, entailing both clustering and stratified sampling techniques. The sample was divided into five clusters distributed across the five regions of Riyadh City. Each regional cluster involved one randomly selected high school. Moreover, stratified sampling was applied to the selection of classrooms. Since female high schools were accessible, all female students were grouped by level (first, second, and third), and from each level, one class was selected randomly. All students in the selected classes were recruited [33]. Approximately 79 students in each regional cluster were enrolled. Data collection followed specific ethical protocols, involving an explanation of the purpose of the study to the participants, distribution of questionnaires by the researchers, and assurance of voluntary participation with agreement to participate secured. The questionnaires were handled confidentially, and all collected data were manually verified. BMI was calculated from the participants' measured weight and height. The CDC recommends BMI categorisation for children and teens between the ages of 2 and 20 years; therefore, the BMI-for-age percentile growth charts were used. The CDC categorisation is as follows: underweight, <$5\%$; healthy weight, $5\%$–$85\%$; at risk of overweight, $85\%$–$95\%$; and overweight, >$95\%$ [36].
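To make the categorisation above concrete, the following sketch computes BMI and maps a BMI-for-age percentile onto the quoted CDC categories; the percentile itself would have to be looked up in the CDC growth-chart (LMS) tables for the child's age and sex, which are not reproduced here:

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """BMI = weight (kg) / height (m) squared."""
    height_m = height_cm / 100
    return weight_kg / height_m**2

def cdc_category(bmi_for_age_percentile: float) -> str:
    """CDC cut-offs for ages 2-20, as quoted above."""
    if bmi_for_age_percentile < 5:
        return "underweight"
    if bmi_for_age_percentile < 85:
        return "healthy weight"
    if bmi_for_age_percentile < 95:
        return "at risk of overweight"
    return "overweight"

print(round(bmi(58, 160), 1))  # e.g., 22.7 for a hypothetical participant
```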
## 2.3. Measurements
A 24-item questionnaire consisting of close-ended questions was developed by the researchers. The self-administered questionnaire consisted of four parts, each assessing a certain variable. Part one assessed the sociodemographic characteristics of the adolescent. Part two assessed ATB with ten items (i.e., adolescent attitudes towards the use of food apps). Part three assessed SN with four items (i.e., adolescent sociocultural factors). Finally, part four assessed PBC with six items (i.e., barriers and facilitators of adolescent behaviours). A 5-point Likert scale was used for all parts, with positively worded statements and various response options: a frequency scale ranging from always to never, an agreement scale ranging from strongly disagree to strongly agree, and a level scale ranging from very high to very low. Positive statements are scored 1–5. Each item score was reported individually [33].
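A minimal scoring sketch, assuming the item counts described above (10 ATB, 4 SN and 6 PBC items, each scored 1–5), illustrates how the construct scores and total BI score are derived; this is illustrative only, not the authors' scoring procedure:

```python
def score_bi(atb: list[int], sn: list[int], pbc: list[int]) -> dict[str, int]:
    """Sum Likert responses (1-5) per construct and overall.
    Possible ranges: ATB 10-50, SN 4-20, PBC 6-30, BI 20-100."""
    assert len(atb) == 10 and len(sn) == 4 and len(pbc) == 6
    assert all(1 <= r <= 5 for r in atb + sn + pbc)
    scores = {"ATB": sum(atb), "SN": sum(sn), "PBC": sum(pbc)}
    scores["BI"] = sum(scores.values())
    return scores

print(score_bi([3] * 10, [4] * 4, [3] * 6))
# {'ATB': 30, 'SN': 16, 'PBC': 18, 'BI': 64} -- close to the reported means
```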
## 2.4. Validity and Reliability
It is necessary to measure the validity and reliability of a newly established research scale. Content validity evaluates whether the questions cover all aspects of the study and whether irrelevant questions should be removed [33]. An empirical method of testing content validity is to calculate the content validity index (CVI). The CVI was determined by experts ($$n = 7$$) from the field who evaluated each item of the questionnaire. The items were evaluated for their clarity, relatedness, representativeness and appropriateness, and whether their instructions suited the target group. Each item was rated on a four-point scale (from 1, not relevant, to 4, very relevant) to determine whether the item was to be approved or rejected [37]. The rating scales were described at the item level (I-CVI) and scale level (S-CVI). I-CVI was computed as I-CVI = (number of experts rating the item 3 or 4)/(number of experts). S-CVI was determined both as the average of the I-CVIs for all items on the scale (S-CVI/Ave) and as the proportion of items on the scale scored 3 or 4 by all experts (S-CVI/UA) [38,39]. To test the reliability of the established scale, stability and internal consistency were examined; coefficient alpha is the best-established technique for assessing consistency. Cronbach's alpha ranges from 0.00 to 1.00, and higher values indicate greater internal consistency [33]. The established scale demonstrated reliable properties for the adolescent age group. The internal consistency calculated using Cronbach's alpha was good (α = 0.84) for the 20 items of the entire scale (BI) and for the subscales ATB (α = 0.80) and PBC (α = 0.71), and acceptable for the subscale SN (α = 0.66). Additionally, the average inter-item correlation, calculated to assess internal consistency, was good (0.42), and item-level values were positive, ranging between 0.20 and 0.57. The ideal range for item-level correlations is considered to be 0.15–0.50, and values over 0.2 are considered acceptable [40,41].
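The CVI and reliability statistics referenced above follow standard definitions, which the sketch below implements (an assumed implementation, not the authors' code): I-CVI is the share of experts rating an item 3 or 4, S-CVI/Ave averages the I-CVIs, S-CVI/UA is the proportion of items rated 3 or 4 by all experts, and Cronbach's alpha uses its usual variance form.

```python
import numpy as np

def content_validity(ratings: np.ndarray):
    """ratings: experts x items matrix of 1-4 relevance scores."""
    agreed = ratings >= 3
    i_cvi = agreed.mean(axis=0)            # per-item agreement proportion
    s_cvi_ave = i_cvi.mean()               # average of the I-CVIs
    s_cvi_ua = agreed.all(axis=0).mean()   # universal-agreement proportion
    return i_cvi, s_cvi_ave, s_cvi_ua

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix; alpha lies between 0.00 and 1.00."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```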
## 2.5. Statistical Analysis
The IBM SPSS for Windows, version 26.0 (IBM Corp., Armonk, NY, USA), was used to analyse all data. Descriptive statistics were presented using numbers, percentages and mean ± standard deviation. Sociodemographic characteristics and BI were compared using the Kruskal–Wallis test, whereas differences in the score of BI and its constructs according to the BMI level were analysed using the Mann–Whitney U-test. Furthermore, Spearman’s correlation coefficient was used to determine the correlation between the BI scale and its constructs. The normality test was conducted using the Shapiro–Wilk test and Kolmogorov–Smirnov test. A $p \leq 0.05$ was taken as significant.
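For orientation, a hedged sketch of the same test battery using SciPy equivalents of the SPSS procedures named above (synthetic data; group sizes and variable names are illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
bi_normal = rng.normal(66, 10, 275)  # synthetic BI scores, normal/underweight BMI
bi_high = rng.normal(64, 10, 110)    # synthetic BI scores, overweight/obese BMI

# BI by BMI level (two groups): Mann-Whitney U test
u_stat, p_u = stats.mannwhitneyu(bi_normal, bi_high)

# BI across the five educational offices: Kruskal-Wallis test
offices = np.array_split(rng.permutation(np.concatenate([bi_normal, bi_high])), 5)
h_stat, p_h = stats.kruskal(*offices)

# correlation between BI constructs: Spearman's rho
atb = rng.normal(31.6, 5.8, 385)
bi = atb * 2 + rng.normal(0, 4, 385)
rho, p_rho = stats.spearmanr(atb, bi)

# normality check: Shapiro-Wilk
w_stat, p_w = stats.shapiro(bi_normal)
print(p_u, p_h, rho, p_w)
```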
## 3. Results
A total of 385 young girls completed the survey, after five incomplete questionnaires were excluded. As shown in Table 1, the most common age was 17 years ($36.1\%$). Girls enrolled from the north educational office constituted $20.5\%$, and $36.6\%$ of participants were in the second year. Additionally, participants with normal BMI were predominant ($71.4\%$). Table 2 shows the mean scores of the BI constructs, which comprise ATB, SN and PBC. Regarding the ATB, the mean score was highest for the statement ‘I like to use the food app because it is easy and convenient’ (mean score, 3.76), followed by ‘Food apps give various food choices than home meals’ (mean score, 3.67) and ‘Meals that are ordered through food apps are more attractive than home meals’ (mean score, 3.56), whereas it was lowest for the statement ‘I used to order through food apps almost daily’ (mean score, 2.45). The overall mean score was 31.6 (SD 5.82). For the SN, the mean rating was highest for the statement ‘Food apps advertisement is everywhere’ (mean score, 4.34), whereas it was lowest for the statement ‘Food apps were recommended by my friends’ (mean score, 3.12). The overall mean score for SN was 14.8 (SD 2.49). Finally, for the PBC, the mean score was highest for the statement ‘It is easy to order food through food apps because I have a device’ (mean score, 3.97), followed by ‘It is easy to order food through food apps because I have internet access all the time’, whereas it was lowest for the statement ‘It is easy to order food through food apps because my mother is busy and cannot cook home meals’ (mean score, 2.30). The overall mean score of the PBC was 18.9 (SD 4.25), and the mean total BI score was 65.4 (SD 9.95). In Table 3, a positive and highly significant correlation was found between the BI score and its constructs, including ATB ($r = 0.870$), SN ($r = 0.682$) and PBC ($r = 0.750$). In addition, we noted a positive and highly significant correlation between attitude towards behaviour and both SN ($r = 0.469$) and PBC ($r = 0.394$). Finally, a positive and highly significant correlation was observed between SN and PBC ($r = 0.369$). In Table 4, no significant association was found between the BMI level and the total BI score or its constructs ATB, SN and PBC ($p > 0.05$). In Table 5, a higher BI score was most strongly associated with the east educational office and lowest in the central educational office ($H = 28.813$; $p \leq 0.001$). No significant differences were found in BI according to age group ($$p = 0.099$$) or academic year level ($$p = 0.274$$). In Table 6, the post-hoc analysis indicates that the mean differences in BI scores were significant between the south educational office and the east educational office ($$p = 0.015$$). Moreover, we found significant differences between the central educational office and the north educational office ($$p = 0.002$$), west educational office ($$p = 0.007$$) and east educational office ($p \leq 0.001$).
## 4. Discussion
This study investigated the relationship between food app usage and overweight and obesity among adolescent girls. To our knowledge, this is the first study in Riyadh, Saudi Arabia, to test the influence of ordering meals through digital apps on the weight levels of female high school adolescents. We employed the TPB as a tool for measuring behavioural intentions in using digital food apps. The results revealed that the overall BI score had a mean of 65.4 (SD 9.95). Regarding its constructs, the ATB mean score was 31.6, the SN mean score was 14.8, and the PBC mean score was 18.9. The overall scores of the BI and its constructs were above the scale midpoints, suggesting a high behavioural intention to use food apps among the participants. The continuous innovation of the digital world is reflected even in food consumption, and the increased use of food apps was evident in our results. This study adds to the existing discussion on consumer behaviour in the context of digital food delivery in Saudi Arabia and uncovers elements that could be used to predict people's motivation to buy food through food delivery apps.
A positive and highly significant correlation was found between the overall BI score and its constructs, indicating that an increase in the score of each ATB, SN and PBC construct correlated with an increase in the overall BI score. For instance, increasing adolescents' attitudes, SN or PBC towards ordering food through food apps correlated with an increase in overall behavioural intention. Consistent with our findings, Choyhirun et al. [2008] found that attitudes, SN and PBC explained up to $41.8\%$ of the variance in intentions, with intentions influenced most by PBC, followed by attitudes and SN [42]. The positive trend of behavioural intentions in using food applications among our youth may lead to overconsumption, which may result in unhealthy food consumption. This scenario may be in accordance with those of Lwin et al. [2017] and Andrews, Silk and Eneli [2010] [43,44]. According to their reports, the enhanced attitude of children towards eating healthy food is directly influenced by the guidance of parents, decreasing the intention to eat unhealthy food; however, parental mediation of TV advertising negatively affected healthy food attitudes to a greater extent [43,44]. Hence, parental guidance is imperative when children order food through apps, to avoid overconsumption of food and an unhealthy lifestyle.
Our results suggest that although participants who were overweight/obese had a slightly more positive attitude towards using food apps, adolescents with normal/underweight BMI had slightly higher SN, PBC and behavioural intentions; however, these differences did not reach significance ($p > 0.05$). In a study conducted in Thailand among 112 overweight ($$n = 52$$) and obese ($$n = 56$$) young adults, the overall mean TPB score increased significantly from baseline in healthy dieting behaviour and SN following group counselling, leading the authors to conclude that group counselling was not inferior to individual counselling and is a better option for healthy dieting management [45]. In Greece, correlations between TPB variables and behaviours (healthy eating and exercise) were higher in the normal-weight group than in the overweight/obese group, and attitude was a significant predictor with higher values in the normal-weight group [46]. On the contrary, several studies have reported a decrease in BMI after educational interventions. For example, Jeihooni et al. [2022] reported that before the educational intervention, no significant difference in behavioural intention was found between the experimental and control groups; after six months of the training intervention, significant improvement was found in each of the TPB constructs, as well as in weight and BMI, in the intervention group, whereas the control group did not differ significantly after the educational intervention [47]. Similar conclusions were reached by Sanaeinasab et al. [2020], Mazloomy-Mahmoodabad et al. [2017] and Soorgi, Miri and Sharifzadeh [2015], who revealed significant changes in behavioural intentions and BMI levels after educational interventions, specifically in the experimental groups [48,49,50].
No significant differences in BI were found in relation to age and academic year level ($p > 0.05$). These findings are similar to those of Jeihooni et al. [2022] [47], who found no significant baseline difference in TPB constructs between the experimental and control groups in terms of age and education. The study by Alfadda and Masood [2019] correlated overweight and obesity with high levels of parental socioeconomic status and urbanisation in Saudi society [18]. Although caution may be warranted, further investigations should be conducted to determine the effect of behavioural intentions on the sociodemographic data of overweight and obese populations. One unanticipated finding was that the difference in BI across educational region office locations was significant ($p \leq 0.001$): the east educational office was associated with a higher BI score, whereas the central educational office was associated with a lower BI score. A possible explanation may be the lack of data on the financial and employment status of the participants and their parents, or residence in a location with a variety of restaurants, which creates high intrinsic motivation to order using food applications. A limitation of this study is that descriptive studies are not helpful in understanding the causes of phenomena, as the survey method limits the ability to identify causes. Additionally, there is a lack of information regarding the use of food apps by boys in high schools. Despite these limitations, this study can help in testing a newly established scale. Although the scale revealed good reliability and validity characteristics, further explorative and validation studies should be considered to test it. Further research is recommended to establish the influence of food app services among individuals with increased BMI levels. It would be a fruitful endeavour to repeat the current findings in different contexts for a better understanding of the factors that influence the intended behaviour of food app use in a larger age group. Moreover, this study can serve as a reference guide and baseline for future research investigating such topics.
## 5. Conclusions
This study set out to investigate the influence of food app intention behaviour on adolescent girls in Saudi Arabia using the theory of planned behaviour (TPB). We found a high behavioural intention among the participants to use food apps, as indicated by the overall scores of the BI and its constructs, which were above the scale midpoints. As the digital world develops, so does food consumption, and our results demonstrate that more people are taking advantage of food apps to consume food. Through food delivery apps, we explored the elements that could be used to predict people's motivations to buy food in the context of digital food delivery in Saudi Arabia. The results indicate that although adolescents with overweight/obese BMI had a slightly more positive attitude towards using food apps, adolescents with normal/underweight BMI had slightly higher SN, PBC and behavioural intentions. Further, the evidence suggests that ATB, SN, PBC and overall BI were not directly related to increased weight in this young population. However, a higher overall behavioural intention to use food apps was associated with participants enrolled in the east educational office and a lower one with those enrolled in the central educational office.
# Galectin-3 and Blood Group: Binding Properties, Effects on Plasma Levels, and Consequences for Prognostic Performance
## Abstract
Previous studies have reported an association between the ABO blood group and cardiovascular (CV) events and outcomes. The precise mechanisms underpinning this striking observation remain unknown, although differences in von Willebrand factor (VWF) plasma levels have been proposed as an explanation. Recently, galectin-3 was identified as an endogenous ligand of VWF and red blood cells (RBCs); therefore, we aimed to explore the role of galectin-3 in different blood groups. Two in vitro assays were used to assess the binding capacity of galectin-3 to RBCs and VWF in different blood groups. Additionally, plasma levels of galectin-3 were measured in different blood groups in the Ludwigshafen Risk and Cardiovascular Health (LURIC) study (2571 patients hospitalized for coronary angiography) and validated in a community-based cohort of the Prevention of Renal and Vascular End-stage Disease (PREVEND) study (3552 participants). To determine the prognostic value of galectin-3 in different blood groups, logistic regression and Cox regression models were used, with all-cause mortality as the primary outcome. First, we demonstrated that galectin-3 has a higher binding capacity for RBCs and VWF in non-O blood groups, compared to blood group O. Additionally, LURIC patients with non-O blood groups had substantially lower plasma levels of galectin-3 (15.0, 14.9, and 14.0 μg/L in blood groups A, B, and AB, respectively, compared to 17.1 μg/L in blood group O, $p \leq 0.0001$). Finally, the independent prognostic value of galectin-3 for all-cause mortality showed a non-significant trend towards higher mortality in non-O blood groups. Although plasma galectin-3 levels are lower in non-O blood groups, the prognostic value of galectin-3 is also present in subjects with a non-O blood group. We conclude that physical interaction between galectin-3 and blood group epitopes may modulate galectin-3, which may affect its performance as a biomarker and its biological activity.
## 1. Introduction
Cardiovascular (CV) diseases account for $32\%$ of global deaths, and the prognosis of patients with CV disease, particularly in patients with heart failure, remains poor; it is, therefore, important to further investigate disease characteristics and identify risk factors that can serve as therapeutic targets [1,2]. Besides the classical risk factors of heart failure such as hypertension, smoking, dyslipidaemia, obesity, and diabetes mellitus, a sedentary habit, excessive alcohol intake, influenza, certain microbes, cardiotoxic drugs, chest radiation, and coronary artery disease also have to be considered [3,4]. However, the residual risk remains high, and we do not fully understand all factors contributing to CV disease development.
In the past years, the ABO blood group has been identified as a novel and intriguing risk factor for CV disease. Multiple studies have shown an association between non-O blood groups and the risk of different thromboembolic events [5], coronary heart disease [6], the size of a myocardial infarction after an acute coronary syndrome [7], increased mortality in patients with ischemic heart disease [8], and venous thrombosis [9]. The exact mechanisms behind these associations remain unclear to date, but as a possible common mechanism, variable levels and activity of the von Willebrand Factor (VWF) have been proposed. VWF is widely acknowledged as a key determinant in CV homeostasis and has been linked to thrombosis and CV events [10,11]. VWF was also found to be a binding partner of galectin-3 [12].
Galectin-3 is a carbohydrate-binding protein and has been shown to be involved in inflammation, cancer, and CV disease [13,14,15,16,17]. It was shown that galectin-3 is able to modulate VWF-mediated thrombus formation via a direct (physical) interaction with VWF [12]. A possible link between galectin-3 and blood group has been described previously—a genome-wide association study showed that the ABO gene locus was strongly associated with plasma galectin-3 levels [18]. This ABO locus appears to be a very pleiotropic locus that associates with several CV traits [19]. Building upon those findings, we hypothesized that ABO, galectin-3, and VWF would interact and, specifically, that the described associations between galectin-3 and CV outcome [20] can, at least partially, be explained by an interaction with the ABO blood group and VWF levels.
## 2.1. Study Population
The baseline characteristics of patients in the LURIC study are presented in Table 1. The mean age was 63 ± 10 years, and the majority of the population was male ($68\%$). Of these patients, 946 ($37\%$) had blood group O, 1219 ($47\%$) blood group A, 276 ($11\%$) blood group B, and 130 ($5\%$) blood group AB. Additionally, 495 ($19\%$) of the patients were smokers, and a medical history of hypertension or coronary artery disease was very common ($73\%$ and $77\%$, respectively). To validate our findings, we studied the relationship between galectin-3 and blood group in a community-based cohort, the PREVEND study. Participants of the PREVEND cohort were younger (mean age 50 ± 12 years), and sex was equally distributed ($51\%$ male versus $49\%$ female); 1557 ($44\%$) had blood group O, 1606 ($45\%$) blood group A, 271 ($8\%$) blood group B, and 118 ($3\%$) blood group AB. Smoking was common ($46\%$), but hypertension was less abundant than in the LURIC cohort ($30\%$), as expected (Supplemental Table S1).
## 2.2. Galectin-3 Plasma Levels Stratified by Blood Group
The LURIC cohort was stratified by blood group. Plasma levels of galectin-3 were significantly higher in blood group O compared to other blood groups ($p \leq 0.0001$ for all groups versus blood group O) (Table 1, Figure 1A). Furthermore, VWF levels were significantly lower in blood group O compared to other blood groups (Table 1). In the PREVEND cohort, galectin-3 levels were also significantly different among blood groups and showed the highest values in blood group O compared to other blood groups (Figure 1B, Supplemental Table S1). Moreover, subjects with homozygous blood groups showed a trend towards lower plasma levels of galectin-3 compared to subjects with heterozygous blood groups (Supplemental Figure S1).
## 2.3. Binding of Galectin-3 and Red Blood Cells
Galectin-3 is known to mediate the hemagglutination of red blood cells (RBCs). To further characterize a potential interaction between galectin-3, VWF, and blood group, two different in vitro assays were performed. The first assay was a hemagglutination assay, to examine the interaction between galectin-3 and the blood group, as displayed in Supplemental Figure S2. With this assay we showed that the binding of galectin-3 with RBCs was significantly different between blood groups, with RBCs from blood group O binding less galectin-3 compared to all other blood groups (Figure 2A,B).
## 2.4. Interaction between VWF and Galectin-3
Since galectin-3 has been presented as a binding partner of VWF, we assessed the binding of VWF to galectin-3 in different blood groups using a VWF-galectin-3 binding assay. Plasma samples were equalized to similar VWF concentrations (as determined by ELISA) with $0.9\%$ NaCl and incubated in a plate coated with galectin-3. Bound VWF was then detected using VWF antibodies. This assay showed that the binding of galectin-3 to VWF was stronger in all non-O blood groups compared to blood group O (Figure 2C).
## 2.5. Prognostic Value of Galectin-3
We studied the prognostic value of galectin-3 in different blood groups in the LURIC study cohort. During a median follow-up time of 9.8 [8.6–10.4] years, 758 deaths ($29\%$) were observed. Using Cox regression analyses, galectin-3 remained a significant predictor for all-cause mortality, even after multivariate adjustment (HR 1.89 [1.28–2.79] and HR 2.19 [1.67–2.86] in blood group O and blood group non-O, respectively) (Table 2). The HR is higher in non-O blood groups, although galectin-3 plasma levels were lower in these patients (Figure 3A).
We also assessed the prognostic value of galectin-3 among different blood groups in the general population after adjustment for the same variables. In the PREVEND study, the median follow-up time was 12.6 [12.3–12.9] years, and 353 subjects ($10\%$) died during this period. The same trend was observed as in the LURIC study: galectin-3 appeared to have a higher prognostic value regarding all-cause mortality in non-O blood groups (Figure 3B), although the p for interaction was non-significant. Blood group itself was not an independent predictor of outcome in either the LURIC or PREVEND study cohort (Supplemental Table S2).
## 3. Discussion
We demonstrate that circulating galectin-3 levels in subjects with non-O blood groups are significantly lower compared to levels in subjects with blood group O. However, the prognostic value of galectin-3 is stronger in subjects with non-O blood groups. As a potential mechanism, we propose that VWF may mediate this, as circulating VWF and galectin-3 were inversely related. We demonstrate that galectin-3 binds stronger to RBCs and VWF of subjects with non-O blood groups compared to subjects with blood group O.
Accumulating evidence suggests that the ABO blood group is involved in the pathogenesis of CV disease and that non-O blood groups carry the highest risk of CV disease [19,21]. Previous studies have shown that the presence of non-O blood groups is associated with worse outcomes compared to blood group O [21,22,23,24]. In a recent case-control study comprising 165 centenarians and 5063 blood donors from the same geographical region, the prevalence of blood group O was higher among centenarians ($56.4\%$ vs. $43.5\%$; $$p = 0.001$$) [25]. Besides studies that demonstrate a higher CV risk for non-O blood groups, a few studies specifically found the highest risk for blood group A and blood group AB [6,21,26,27,28,29]. For example, one recent Finnish study found the highest risk of ischaemic heart disease in patients with blood group A who had T1DM and microalbuminuria [27]. A Canadian study ($$n = 64,686$$) demonstrated that blood group AB is associated with an increased risk of thrombotic events in participants from Quebec [28].
The ABO(H) blood group is the most important blood group system and is determined by complex carbohydrate moieties at the extracellular surface of the RBC membrane [30]. The A and B alleles encode for either A- or B-glycosyltransferases that add N-acetylgalactosamine or D-galactose to the common H-glycan precursor backbone, respectively. In subjects with blood group O, no A- or B-transferase activity is present, resulting in the expression of the H-glycan backbone without an additional group [31]. Next to the expression of RBCs, these blood group epitopes and different antigens are also expressed on other cells, such as the vascular endothelium, epithelial cells, T-cells, B-cells, and platelets, and present on molecules such as VWF [32,33].
Several studies described the major effects of the ABO blood group on plasma levels of VWF: plasma VWF levels appear to be $25\%$ lower in the O blood group compared to non-O blood groups [6]. This implies that subjects with blood group O may experience a higher incidence of bleeding events, while subjects with non-O blood groups experience a higher incidence of thrombotic events [34,35]. The exact mechanisms underpinning these observations remain unclear, but this effect may be mediated by VWF. The effect of the ABO blood group on plasma levels of VWF seems to be the result of a direct effect of the ABO blood group [36]. The conversion of the blood group O determinant into other antigens of the ABO blood group was correlated with an increased capacity to modify the N-linked glycosylation of VWF [37]. Therefore, changes in VWF glycan composition also affect the biological activity of VWF and are not restricted to its plasma levels [38]. Carbohydrate structures on the surface of VWF play an important role in the life cycle of VWF. Galectin-3 is a carbohydrate-binding protein and has recently been identified as a new partner of VWF [12]. Furthermore, the affinity of transmembrane glycoproteins to the galectin-3 molecule is proportional to the number and branching of their N-glycans [39]. Therefore, we hypothesize that the biological activity of galectin-3 might also be directly regulated by the glycosylation of the molecule by the ABO blood group.
In agreement with previous studies, we confirmed that plasma VWF levels are ~$25\%$ higher in non-O blood groups. Additionally, we now show in two independent cohorts with different populations, that galectin-3 levels are significantly lower in non-O blood groups. Furthermore, we show that galectin-3 levels are lower in patients who had a heterozygous blood group. This inverse relationship between galectin-3 and VWF levels in different blood groups is an interesting phenomenon, potentially explained by the fact they are ligands of each other.
Numerous studies have assessed the prognostic value of galectin-3 in various cohorts [40,41,42]. We again corroborated these findings in the current study and herein confirm that galectin-3 is an independent predictor for all-cause mortality, particularly in subjects with non-O blood groups. The striking observation that galectin-3 has a strong prognostic value in non-O blood groups, although the group has lower galectin-3 values, should be explored in further detail. We speculate that the observed lower galectin-3 plasma values in the non-O blood group participants are caused by galectin-3 binding with blood group epitopes and that glycosylation might play a role in this.
In two different in vitro assays, we show a higher binding capacity of galectin-3 with RBCs and VWF in subjects with non-O blood groups, compared to blood group O. Binding preference of galectin-3 is most likely related to the extensive glycosylation of VWF, generating a clustered glycan surface, resembling the cell membrane [12]. These protein-glycan interactions between VWF and galectin-3 mainly consist of binding patterns with N-linked glycans rather than O-linked glycans, as has been shown previously [43]. Galectins regularly show a high affinity for glycans with longer poly-N-acetyllactosamine (poly-LacNAc) chains, given their higher binding capacity for N-linked glycans.
The higher hemagglutination activity in subjects with non-O blood groups is consistent with previous findings from erythrocyte binding and glycan microarray studies, suggesting that galectin-3 exhibits higher binding towards blood group A and B antigens compared to those bearing the H antigen [43,44,45,46]. While all galectins show a high affinity for β-galactosides, their recognition of terminal glycan modifications varies. The enhanced recognition by galectin-3 of A and B blood group substitutions is potentially caused by unique subsites within the carbohydrate recognition domain (CRD) [43] and might play an evolutionary role; in fact, it enables the targeting of microbes that utilize blood group molecular mimicry [47]. Additionally, we hypothesize that the stronger binding of galectin-3 to RBCs and VWF in non-O blood groups could explain the lower levels of circulating galectin-3.
The prognostic value and absolute levels of biomarkers may differ between subgroups in a study cohort, as previously observed for other biomarkers [48]. For instance, plasma hemoglobin levels differ between sexes, and age, renal function, and the presence of diabetes are also important determinants of hemoglobin level [49,50]. Even for the established cardiac marker NT-proBNP, important determinants lead to differences in circulating levels: renal failure tends to increase natriuretic peptide levels, whereas patients with obesity show lower levels of NT-proBNP [51,52]. Using a combination of biomarkers might improve the risk prediction of clinical outcomes and, thereby, reduce healthcare-related costs.
In conclusion, we postulate that the binding of galectin-3 to the A-, B-, and AB- blood group epitopes affects the circulating plasma levels and its biological activity, and thereby also its prognostic power for a given concentration. Future studies should provide more detailed data on this interaction and practical information on how to deal with this potential confounder.
## 4.1.1. LURIC
The Ludwigshafen Risk and Cardiovascular Health (LURIC) study consists of 3316 patients who were hospitalized for coronary angiography between 1997 and 2000. Indications for coronary angiography were chest pain or a positive non-invasive stress test suggestive of myocardial ischemia. Further methods and results have been described previously [53]. In total, galectin-3 values and blood group information were available for 2571 patients.
## 4.1.2. PREVEND
The Prevention of Renal and Vascular End-stage Disease (PREVEND) study is a prospective, observational, community-based study and was used to validate our findings [18,54]. The PREVEND study enrolled community-dwelling subjects during 1997–1998, and the study was designed to track the long-term development of cardiac, renal, and peripheral vascular disease. More details of the design of the study have been described previously [55,56]. Galectin-3 and blood group data were available in 3552 subjects.
In both studies, all participants provided informed consent, and the study procedures were conducted in accordance with the 1975 Declaration of Helsinki. The LURIC study was approved by the ethical committee of the Ärztekammer Rheinland-Pfalz, and the PREVEND study was approved by the ethical committee of the University Medical Center Groningen (UMCG).
## 4.2. Galectin-3 Measurements
In the LURIC study, galectin-3 levels were measured in baseline plasma samples. These samples were stored at −80 °C and analysed using the ARCHITECT analyser (Abbott Diagnostics, Abbott Park, IL, USA). This automated assay uses the same antibodies and conjugates as the manual assay and has a lower limit of detection of 1.01 ng/mL; intra- and inter-assay variability are $3.2\%$ and $0.8\%$, respectively [57]. In the PREVEND study, blood was drawn at baseline and anticoagulated with EDTA, and samples were stored at −80 °C until analysis. Galectin-3 concentration was measured in baseline plasma samples using the BGM galectin-3 ELISA kit (BG Medicine Inc., Waltham, MA, USA). Intra- and inter-assay coefficients of variation of this assay are $3.2\%$ and $5.6\%$, respectively. The assay has a lower limit of detection of 1.13 ng/mL and did not show cross-reactivity with collagens or other members of the galectin family [58].
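The intra- and inter-assay variability figures quoted above are coefficients of variation; a minimal sketch of the underlying calculation (with hypothetical replicate readings):

```python
import numpy as np

def coefficient_of_variation(replicates) -> float:
    """CV (%) = SD / mean x 100, the metric behind the intra- and
    inter-assay variability figures quoted above."""
    replicates = np.asarray(replicates, dtype=float)
    return replicates.std(ddof=1) / replicates.mean() * 100

# hypothetical triplicate galectin-3 readings (ng/mL) of one control sample
print(round(coefficient_of_variation([14.8, 15.2, 15.1]), 1))  # -> 1.4
```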
## 4.3. Blood Group Determination
Blood group in LURIC was determined in the Haemostaseology Laboratory of the Ludwigshafen Cardiac Centre using a blood group antisera macroscopic agglutination assay (ABO- and Rh-blood group sera, Loxo GmbH, Dossenheim, Germany). In the PREVEND cohort, the ABO blood group was inferred from genotyping three single nucleotide polymorphisms (SNPs) on the ABO gene, namely rs8176719, rs8176746, and rs8176747. Using a combination of these SNPs, a blood group could be determined, as described previously [59].
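To illustrate how a blood group can be inferred from these three markers, the sketch below encodes one common simplification, which we assume rather than take from [59]: the rs8176719 single-base deletion tags a non-functional O allele, the minor alleles of rs8176746/rs8176747 tag the B allele, and remaining haplotypes are called A. Phased haplotypes and the "del"/"minor" allele coding are hypothetical.

```python
def abo_allele(rs8176719: str, rs8176746: str, rs8176747: str) -> str:
    """Call one ABO allele from a phased haplotype (simplified logic)."""
    if rs8176719 == "del":        # frameshift deletion -> inactive enzyme -> O
        return "O"
    if rs8176746 == "minor" or rs8176747 == "minor":  # assumed B-tagging alleles
        return "B"
    return "A"

def blood_group(hap1: tuple[str, str, str], hap2: tuple[str, str, str]) -> str:
    alleles = {abo_allele(*hap1), abo_allele(*hap2)}
    if alleles == {"O"}:
        return "O"
    alleles.discard("O")              # O is recessive to A and B
    return "".join(sorted(alleles))   # "A", "B", or "AB"

print(blood_group(("del", "major", "major"), ("G", "minor", "minor")))  # -> "B"
```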
## 4.4. Clinical Endpoints
In LURIC, mortality data were collected from local registries. Two independent and experienced clinicians, who were blinded to patient characteristics, reviewed information from death certificates, medical records from hospitals, and data from autopsies [20,60]. In PREVEND, mortality data were collected using the municipal register, and cause of death was obtained using the Prismant health care data system or the Dutch Central Bureau of Statistics. Follow-up ran until the date of the event or the last contact, whichever occurred first, at which point observations were censored.
## 4.5.1. Isolation of Red Blood Cells
Neonatal cord blood was obtained, after informed consent, from donors with healthy full-term pregnancies at the obstetrics departments of the Martini Hospital Groningen and the UMCG. All donors were informed about the studies performed, as approved by the local Medical Ethical Committee of the UMCG. Furthermore, healthy volunteers from the research lab also provided blood specimens. Blood was collected in 10 mL EDTA tubes, and 20 µL of blood was used to determine the ABO blood group using a Serafol ABO bedside test (Bio-Rad Laboratories BV, Veenendaal, the Netherlands). The remaining blood was centrifuged at 3500 rpm for 5 min. The buffy coat appeared as a dense white layer between the RBCs and plasma; plasma and the buffy coat were removed from the tube. The remaining RBCs were resuspended in PBS and centrifuged at 2000 rpm for 5 min at 4 °C, and this washing step was repeated 3 times. Subsequently, the RBCs were diluted 12.5× in PBS-$3\%$ glutaraldehyde and rotated on a wheel for 1 h at room temperature. Afterwards, the cells were washed 5 times with PBS ($0.0025\%$ NaN3) and centrifuged at 2000 rpm for 2 min at 4 °C; in the last step, cells were resuspended at 3–$4\%$ in PBS ($0.0025\%$ NaN3). Cells were stored at 4 °C for several days.
## 4.5.2. Hemagglutination Assay
RBCs were counted using a Fuchs-Rosenthal counting chamber, and all samples were diluted to the lowest RBC concentration. We first calibrated our hemagglutination assay to determine the number of RBCs needed to show hemagglutination and to clearly distinguish between agglutinated and non-agglutinated cells, testing 3 concentrations of RBCs (5 µL/10 µL/15 µL of 2000 cells/µL) and 2 concentrations of galectin-3 (1 µM/2 µM). Following calibration, we used 15 µL RBCs/2 µM galectin-3 in the first well of a round-bottom, 96-well plate (Costar #3799, Corning Inc., Kennebunk, ME, USA). Next, 2 µM galectin-3 was serially diluted 1:1 into the next wells, and 87.5 µL PBS was added to a total volume of 185 µL. Finally, 15 µL (2000 cells/µL) of RBCs was added to each well. The plate was incubated for 30 min at 4 °C, and images were captured using the ImageQuant LAS 4000 (GE Healthcare Europe GmbH, Diegem, Belgium). Hemagglutination was assessed using ImageJ software (Version 1.50, National Institutes of Health, Bethesda, MD, USA), and the hemagglutination index (HA-index), defined as (surface area of RBCs after incubation/surface area of the total well) × 100, was calculated.
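The HA-index and the dilution series translate directly into code; the sketch below mirrors the formula given above and the 1:1 serial dilution of galectin-3 (area values would come from ImageJ measurements; the numbers shown are hypothetical):

```python
def ha_index(rbc_area_px: float, well_area_px: float) -> float:
    """HA-index = (RBC surface area after incubation / total well area) x 100."""
    return rbc_area_px / well_area_px * 100.0

# 1:1 (two-fold) serial dilution of galectin-3 starting at 2 uM
start_um = 2.0
dilution_series = [start_um / 2**i for i in range(12)]  # one value per well

print(round(ha_index(5200, 41000), 1))  # e.g., 12.7 for hypothetical areas
print(dilution_series[:4])              # [2.0, 1.0, 0.5, 0.25]
```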
## 4.5.3. Von Willebrand Factor ELISA
VWF was measured in human plasma using the VWF ELISA kit (Abcam, Cambridge, UK). This kit was designed for the quantitative measurement of human VWF in plasma, serum, and cell culture supernatants. Intra- and inter-assay coefficients of variation of this assay are $5\%$ and $7.1\%$, respectively. The lower level of detection is 2.5 mU/mL.
In LURIC, VWF was measured using the STA Liatest®VWF assay (Stago Diagnostica/Roche, Mannheim, Germany).
## 4.5.4. Galectin-3—von Willebrand Factor Binding Study
As previously described [12], an immunosorbent assay was performed in which a microtiter 96-well plate was coated with galectin-3 (5 µg/well) overnight at 4 °C. After washing 3 times with PBS ($0.1\%$ Tween-20), the plate was blocked for 2 h with PBS ($0.1\%$ Tween-20/$3\%$ BSA) at 37 °C. After washing 2 times with PBS ($0.1\%$ Tween-20), plasma of different blood groups was incubated in the wells for 1 h at 37 °C. After discarding the plasma, the plate was washed 2 times with PBS ($0.1\%$ Tween-20). Bound VWF was detected by adding 50 µL of HRP-labelled polyclonal VWF antibody (1:1000; P0226, DAKO, Glostrup, Denmark). Next, 50 µL of 3,3′,5,5′-tetramethylbenzidine (TMB) was added to detect HRP activity, and after 10 min, 50 µL of stop solution (H2SO4) was added to stop the reaction. The absorbance was measured using a microplate reader at a wavelength of 450 nm (BioTek Synergy H1, Winooski, VT, USA).
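A small sketch of how the resulting absorbances could be blank-corrected and expressed relative to blood group O (readings and the blank value are hypothetical; the original analysis may have processed the raw data differently):

```python
import numpy as np

# hypothetical A450 readings per blood group, after the TMB/stop steps above
a450 = {"O": [0.42, 0.45, 0.40], "A": [0.61, 0.66, 0.63],
        "B": [0.58, 0.62, 0.60], "AB": [0.64, 0.67, 0.66]}
blank = 0.05  # buffer-only wells (assumed)

corrected = {bg: np.mean(vals) - blank for bg, vals in a450.items()}
relative = {bg: v / corrected["O"] for bg, v in corrected.items()}
print(relative)  # VWF-galectin-3 binding of each group relative to blood group O
```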
## 4.6. Statistical Analysis
Normally distributed variables are presented as means ± standard deviation (SD) or standard error of the mean (SEM). Non-normally distributed variables are expressed as medians [interquartile range (IQR)]. To compare normally distributed values across two groups, a two-sample t-test was performed, and to compare non-normally distributed values, we used the Wilcoxon rank-sum test. Categorical values were compared using Pearson’s Chi-square test. Characteristics across four groups were compared using ANOVA for continuous, normally distributed values and the Kruskal-Wallis test for continuous, non-normally distributed values. When comparing more than one group with a control group, we used the Kruskal-Wallis test with post hoc Dunn’s multiple comparisons test.
Prior to analysis, galectin-3 values were transformed logarithmically to obtain approximately normal distributions, as the raw distribution was skewed according to the Shapiro-Wilk test. To study the association of galectin-3 with all-cause mortality, Cox regression and logistic regression analyses were performed with log-transformed galectin-3 as a continuous variable. The models were adjusted for age and sex; a multivariable model additionally included eGFR, smoking, systolic blood pressure, BMI, LDL-cholesterol, diabetes mellitus, lipid-lowering therapy, triglycerides, and CRP. This model is an established risk model for all-cause mortality in the LURIC study and has been used previously in other studies [20]. Results are stratified by blood group and summarized as hazard ratios with $95\%$ confidence intervals (CI). For the interaction term, a p-value of <0.10 was considered to indicate statistical significance. For all other analyses, p-values <0.05 were considered statistically significant. Analyses were performed using STATA software version 14.2 and GraphPad Prism version 9.3.1 (GraphPad Software Inc., La Jolla, CA, USA).
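A minimal sketch of this modelling strategy, using the Python lifelines package rather than STATA, is shown below; the file and column names ("cohort.csv", "gal3", "death", and so on) are placeholders, not the actual LURIC variable names.

```python
# Sketch only: Cox regression of all-cause mortality on log-transformed
# galectin-3, adjusted for the risk-model covariates and stratified by
# ABO blood group. All names are placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")
df["log_gal3"] = np.log(df["gal3"])  # log-transform the skewed biomarker

covariates = ["log_gal3", "age", "sex", "egfr", "smoking", "sbp", "bmi",
              "ldl", "diabetes", "lipid_rx", "triglycerides", "crp"]

for group, sub in df.groupby("blood_group"):  # stratify results by blood group
    cph = CoxPHFitter()
    cph.fit(sub[covariates + ["time", "death"]],
            duration_col="time", event_col="death")
    print(group, round(cph.hazard_ratios_["log_gal3"], 2))
```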
# The Effect of Different Physical Exercise Programs on Physical Fitness among Preschool Children: A Cluster-Randomized Controlled Trial
## Abstract
Background: Preschool children are in a period of rapid physical and psychological development, and improving their level of physical fitness is important for their health. To better develop the physical fitness of preschool children, it is very important to understand the behavioral attributes that promote the physical fitness of preschool children. This study aimed to determine the effectiveness of and the differences between different physical exercise programs in improving preschool children’s physical fitness. Methods: A total of 309 preschool children aged 4–5 years were recruited from 5 kindergartens to participate in the experiment. They were cluster-randomly allocated into five groups: basic movements (BM) group, rhythm activities (RA) group, ball games (BG) group, multiple activities (MA) group, and control (CG) group. The intervention groups received designed physical exercise programs with a duration of 30 min 3 times per week for 16 weeks. The CG group received unorganized physical activity (PA) with no interventions. The physical fitness of preschool children was measured using the PREFIT battery before and after the interventions. One-way analysis of variance, a nonparametric test; generalized linear models (GLM); and generalized linear mixed models (GLMM) were used to examine differences during the pre-experimental stage among groups and to assess the differential effects of the intervention conditions on all outcome indicators. The intervention condition models were adjusted for potential confounders (baseline test results, age, gender, height, weight, and body mass index) explaining the main outcome variance. Results: The final sample consisted of 253 participants (girls $46.3\%$) with an average age of 4.55 ± 0.28 years: the BG group ($$n = 55$$), the RA group ($$n = 52$$), the BM group ($$n = 45$$), the MA group ($$n = 44$$), and the CG group ($$n = 57$$). The results of the generalized linear mixed model and generalized linear model analyses indicated significant differences for all physical fitness tests between groups, except for the 20 m shuttle run test and the sit-and-reach test after the interventions. Grip strength was significantly higher in the BG and MA groups than in the BM group. The scores for standing long jump were significantly higher in the MA group than in the other groups. The scores for the 10 m shuttle run test were significantly lower in the BG and MA groups than in the CG, BM, and RA groups. The scores for skip jump were significantly lower in the BG and MA groups than in the RA group. The scores for balance beam were significantly lower in the BG and MA groups than in the RA group and significantly lower in the BG group than in the BM group. The scores for standing on one foot were significantly higher in the BG and MA groups than in the CG and RA groups and significantly higher in the BM group than in the CG group. Conclusions: Physical exercise programs designed for preschool physical education have positive effects on the physical fitness of preschool children. Compared with the exercise programs with a single project and action form, the comprehensive exercise programs with multiple action forms can better improve the physical fitness of preschool children.
## 1. Introduction
Physical fitness can be considered as the comprehensive performance of physical functions, such as muscular function, cardiovascular function, and metabolic function, effectively during daily physical activity (PA) or physical exercise [1]. Healthy levels of physical fitness guarantee that individuals participate in physical activity and work with vigor, and can promote resistance to fatigue [2]. Studies have indicated that a series of health problems in children are related to low levels of cardiorespiratory fitness and muscle strength, including skeletal dysplasia, cardiovascular metabolic diseases, and premature death in old age [3,4]. Additionally, physical fitness also plays an important role in the healthy life of preschool children, such as obesity prevention [5] and determining tibial bone mineral content, structure, and strength in 3–5-year-old children [6]. In addition to health concerns, physical fitness and intellectual maturity have been proven to be linked from an early age, even predicting intellectual maturity in 3–6-year-old children [7] and contributing to successful academic development in youth [8]. These findings highlight the need for promoting physical fitness among children and encouraging them to engage in regular physical activity.
Physical activity has been proven to be one of the important factors promoting physical fitness and is an essential element of a healthy lifestyle [9,10]. Tan et al. [11] and Wick et al. [12] reported the advantages of physical activity programs over free play in improving the physical fitness of preschool children. Standardized physical activity lessons also exhibited significant advantages over a control condition of unorganized physical activity [13]. In addition, physical activity programs led by kindergartens and teachers have a positive effect on the physical fitness of preschool children [14]. A recent systematic review found that physical exercise, whether on its own or combined with additional interventions, had beneficial effects on cardiorespiratory fitness, lower-body muscular strength, and speed agility in preschoolers [15]. Preschool education policy increasingly favors comprehensive exercise and encourages kindergartens to build their own sports specialties, such as cheerleading, soccer, or basketball, to promote young children’s physical fitness [16]. In short, physical activity plays a crucial role in promoting physical fitness in preschool children, and the implementation of structured physical activity programs and the incorporation of exercise into preschool education policies can have a significant effect on preschool children’s physical fitness and overall health.
However, research has indicated that focusing on just one sport can lead to a series of problems in the growth and development of young athletes [17,18], and it has not been established whether focusing on only one sport can also cause problems in the growth and development of preschool children. Studies on the effects of different exercise plans on physical fitness have reported inconsistent results because of large differences in their quality and methods. Moreover, the current evidence does not support a direct comparison of the effects of different exercise programs, which hampers the selection of physical exercise programs for preschool children. In addition, most previous studies employed professional coaches or physical educators to implement the intervention programs, which limits the generalizability of the findings for guiding physical education practice in preschools. Studies on teacher-centered physical activity interventions have found no significant advantage over control groups in improving the physical fitness of preschool children [14]. To ensure positive physical fitness development in preschool children, it is important to understand the behavioral attributes and causative mechanisms that promote these outcomes [2].
On that basis, we designed a study to compare physical exercise programs that have previously been shown to improve the physical fitness of preschool children, with the expectation of addressing this evidence gap. Specifically, this study aimed to investigate the effectiveness of, and the differences among, these physical exercise programs in improving the physical fitness of preschool children.
## 2.1. Study Design and Participants
This study was a single-blind cluster-randomized controlled trial, with the kindergarten class as the cluster for the intervention. The data were sourced from the Physical Exercise on Fundamental Movement Skills and Physical Fitness of preschoolers (PEFP) project [19]. The study population consisted of preschool children aged 4–5 years who were physically capable of participating in sports and whose parents or guardians had provided written consent. Participants with severe cognitive or motor impairments were accompanied by a support worker during physical exercise but were not included in the data collection. Until the end of the interventions, the participants and teachers took part only in the physical exercise of their own intervention group and were not told the details of the group allocation. The study was approved by the Ethics Committee of Shanghai Sport University and was registered under the ethical review number 102772019RT034.
In this study, a total of 309 preschool children aged 4 to 5 years were recruited from five kindergartens and cluster-randomly assigned to 5 groups: the basic movements (BM) group, rhythm activities (RA) group, ball games (BG) group, multiple activities (MA) group, and control (CG) group. For $30\%$ of the participants, attendance exceeded $\frac{4}{5}$ of the total course, and all participants completed at least $\frac{2}{3}$ of the total course. After preschool children with missing pretest or posttest data were excluded, the final sample consisted of 253 participants (girls $46.3\%$) with an average age of 4.55 ± 0.28 years: the BG group ($$n = 55$$), the RA group ($$n = 52$$), the BM group ($$n = 45$$), the MA group ($$n = 44$$), and the CG group ($$n = 57$$). The flow diagram of the research process is shown in Figure 1.
## 2.2. Intervention Procedures
The present study comprised four intervention groups: the BM, RA, BG, and MA groups. Preschool children in the control group participated in unorganized PA, and the details of the interventions have been described elsewhere [19].
The intervention program consisted of structured lessons with a duration of 30 min performed three times a week for 16 weeks. Kindergarten teachers participated in the study and performed the physical exercise interventions after receiving 2 h of training at a local kindergarten. The structure of each lesson consisted of a warm-up period of 5 min, followed by a core exercise period of 20 min and a cool-down activity of 5 min. The study was performed in the winter, and precautions were taken to ensure the safety of the preschool children, such as starting with low-intensity physical activity (e.g., wrist rotations and leg swings), gradually increasing the intensity (e.g., from arm rotations and knee-up walking to forceful arm swinging and on-site running), and then slowly decreasing the intensity. To ensure comparability across the different programs, the core exercise period followed a consistent intensity control principle, whereby every 10 min of sports activities included at least 5 min of moderate-to-high-intensity physical activity and 2 min of vigorous-intensity physical activity. The interventions were designed as games to increase the children’s interest, with the main differences being in the core exercise content. The interventions were performed within the existing physical activity plans of the kindergartens to avoid additional physical activity for the preschool children in the intervention groups. The intensity of PA was estimated by the teachers on the basis of the active behavior of the preschool children and was determined using the “Compendium of Physical Activities” developed by Ainsworth et al. [20] and the Preschool-Age Children’s Physical Activity Questionnaire [21].
Preschool children in the control group participated in unorganized PA. The PA schedules were arranged by the kindergarten without the guidance of teachers, and the types and intensity of activities were determined by the preschool children.
## 2.3. Measurement Procedures
Physical fitness and descriptive data (e.g., age, sex, height, and weight) of preschool children were tested at baseline and at the end of the interventions, and each test was completed within a week. The physical fitness assessment was primarily based on the PREFIT battery, which has demonstrated satisfactory reliability and validity in evaluating the physical fitness of 4–6-year-old children [22].
The physical fitness of the preschool children was evaluated through a comprehensive test battery consisting of measures of cardiorespiratory fitness, musculoskeletal fitness, and motor fitness. Cardiorespiratory fitness was assessed with the 20 m shuttle run. Musculoskeletal fitness was assessed with grip strength and the standing sit-and-reach. Motor fitness was assessed with the 10 m shuttle run, the balance beam walk, and standing on one foot and hopping. Additionally, anthropometric data, such as height and weight, were collected, and the body mass index (BMI) was calculated from these measurements. The standard testing procedures employed in this study have been described in detail elsewhere [19].
## 2.4. Statistical Analysis
The data were first tested for normality using standardized skewness and kurtosis values. Normally distributed data are presented as the mean and standard deviation, while non-normally distributed data are presented as the median and interquartile range. One-way analysis of variance (ANOVA) and the Kruskal–Wallis H test were used to examine differences among groups during the pre-experimental stage. The matched samples t-test and the Wilcoxon rank-sum test were used to examine within-group differences in the physical fitness tests before and after the intervention. Generalized linear models (GLMs) were used to assess the differential effects of the intervention conditions on all outcome indicators for normally distributed data, and generalized linear mixed models (GLMMs) were used for non-normally distributed data. The intervention condition (CG, BM, RA, BG, and MA) models were adjusted for potential confounders explaining the main outcome variance (baseline test results, age, gender, height, weight, and BMI). Bonferroni-adjusted pairwise comparisons were employed to analyze differences among conditions, and p < 0.05 indicated statistical significance. All statistical analyses were performed using SPSS Statistics version 26.0 (IBM Corp, Chicago, IL, USA).
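As a rough illustration of the adjusted models described above (not the authors' SPSS procedure), the following Python sketch fits a covariate-adjusted linear model and a linear mixed model with a random intercept per kindergarten class; the variable names and data file are hypothetical.

```python
# A rough sketch of the adjusted models (hypothetical variable names),
# using statsmodels instead of SPSS.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("prefit_posttest.csv")  # one row per child (hypothetical file)

formula = ("post_grip ~ C(group) + base_grip + age + C(gender)"
           " + height + weight + bmi")

# Covariate-adjusted linear model of the post-test outcome by group.
glm = smf.ols(formula, data=df).fit()

# Mixed model adding a random intercept per kindergarten class (the cluster).
glmm = smf.mixedlm(formula, data=df, groups=df["class_id"]).fit()

print(glm.summary())
print(glmm.summary())
```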
## 3.1. Participant Characteristics and Physical Fitness Test before Intervention
Table 1 presents participant characteristics and physical fitness test results at the pre-intervention stage. There were significant differences among the groups before the interventions with regard to the balance beam, grip strength, and 20 m shuttle run test (p < 0.05). The scores for the balance beam in the BG group were significantly higher than in the other groups (p < 0.05). The grip strength of the CG and BG groups was significantly higher than that of the BM and RA groups (p < 0.05), and the grip strength of the CG group was significantly higher than that of the MA group (p < 0.05). The scores for the 20 m shuttle run test in the CG group were significantly higher than in the BM and RA groups (p < 0.05). The remaining indexes revealed no significant differences among the groups (Table 1). On the basis of the previous literature and these results, age, gender, height, weight, and BMI were included as covariates in the subsequent analyses.
## 3.2. Physical Fitness Changes after Intervention
Table 2 presents the results of the matched samples t-test and Wilcoxon rank-sum test for the differences in the physical fitness tests before and after the interventions. The pre-post comparisons showed a significant decrease in the sit-and-reach test in all groups after the interventions (p < 0.01). In the CG group, the 10 m shuttle run performance of the preschool children decreased significantly after the experiment (p = 0.009). There were significant improvements in the 20 m shuttle run test (p = 0.001), grip (p < 0.001), standing on one foot (p = 0.027), and skip jump (p = 0.009) following the interventions in the BM group. The RA group had significant improvements in the 20 m shuttle run test (p = 0.042), grip (p < 0.001), and 10 m shuttle run test (p = 0.008) after the interventions. The BG group had significant improvements in grip (p < 0.001), standing on one foot (p = 0.009), 10 m shuttle run test (p < 0.001), and skip jump (p = 0.002) after the interventions. There were significant improvements in grip (p < 0.001), standing long jump (p < 0.001), standing on one foot (p = 0.001), 10 m shuttle run test (p < 0.001), skip jump (p = 0.014), and balance beam (p = 0.047) following the interventions in the MA group. The remaining indexes revealed no significant differences before and after the interventions.
Figure 2 presents the results of the generalized linear mixed-model and generalized linear model analyses for each of the physical fitness tests after the interventions. Grip strength was significantly higher in the BG and MA groups than in the BM group (p < 0.05), indicating a greater improvement in grip strength in those two groups. The scores for the standing long jump were significantly higher in the MA group than in all other groups (p < 0.05). The times for the 10 m shuttle run test were significantly lower (i.e., faster) in the BG and MA groups than in the CG, BM, and RA groups (p < 0.05). The scores for standing on one foot were significantly higher in the BG and MA groups than in the CG and RA groups, and significantly higher in the BM group than in the CG group (p < 0.05). The scores for the skip jump were significantly lower in the BG and MA groups than in the RA group (p < 0.05), and the scores for the balance beam were significantly lower in the BG and MA groups than in the RA group and significantly lower in the BG group than in the BM group (p < 0.05); for both tests, lower scores indicate better performance. However, the scores for the 20 m shuttle run test and the sit-and-reach test revealed no significant differences among the groups.
## 4. Discussion
Preschool children undergo a period of rapid physical growth and maturation of the nervous system, requiring the development of corresponding physical fitness, such as agility, strength, and reaction speed [23,24]. Evidence from systematic reviews points to a strong association between cardiorespiratory and musculoskeletal fitness and the development of motor competence throughout the early years, childhood, and adolescence, an association that strengthens with age [2,25]. Based on this evidence, it is rational to believe that physical fitness is as important for preschool children as it is for older children [26]. The present study aimed to identify more effective physical exercise programs to improve the physical fitness of preschool children and provide evidence for the implementation of preschool physical education. Following the 16-week interventions, the preschool children in all intervention groups exhibited improvements in all physical fitness tests, except for the sit-and-reach test and, in the RA group, the balance beam test. In the CG group, the preschool children showed no significant increase in any physical fitness indicator. The BG and MA groups had a certain advantage over the BM, RA, and CG groups in improving the physical fitness of preschool children.
In terms of cardiorespiratory fitness, pre-post effect sizes exhibited significant improvements in the 20 m shuttle run test in the BM and RA groups, which is consistent with previous studies [15]. However, the improvement in the 20 m shuttle run test before and after the interventions in the BG and MA groups was not as pronounced and not statistically significant. This may be because the baseline cardiorespiratory fitness levels of the preschool children in the BM and RA groups were lower than those of the children in the BG and MA groups. Previous research has indicated that the baseline level of physical fitness in preschool children can affect the effect size of interventions, with higher baseline scores leading to smaller improvements and lower baseline scores leading to larger changes [27,28]. In addition, after the baseline test value and other confounding factors were adjusted, the BG and MA groups demonstrated an advantage in terms of improving cardiorespiratory fitness when compared with the BM and RA groups. Systematic review and meta-analysis results from recent studies have indicated that all types of physical activity programs, including free play, can improve the cardiorespiratory fitness of preschool children to a certain extent [15,29], which is consistent with the findings of this study.
The muscle strength (grip and standing long jump) of preschool children in all groups, including the control group, clearly improved after the interventions, which is consistent with previous research findings [15,27]. The BG and MA groups demonstrated advantages over the BM and RA groups in terms of grip strength improvement, whereas the MA group demonstrated significant improvements in standing long jump performance when compared with the other groups. However, the flexibility (sit-and-reach) of preschool children in all intervention groups and the control group decreased significantly, which contrasts with previous findings [15,27]. Long-term studies have indicated that preschool children’s physical fitness gradually increases with age [30,31], except for flexibility, which may exhibit little change or even decrease without targeted practice [27,32]. In addition, the decline in the preschool children’s flexibility may have been affected by the season (children’s clothing and temperature): the baseline test was conducted in the autumn, when clothing and temperature had little effect on children’s motor performance, whereas the interventions ended in the winter, when cold temperatures and heavy clothing have a great effect on children’s motor performance [33]. It is known that possessing adequate flexibility, range of motion, and muscle strength can mitigate the risk of injury in sports and everyday activities, particularly in later life, when the negative effect of decreased flexibility on health cannot be disregarded [34]. Therefore, flexibility exercise should be an important part of physical exercise programs for preschool children. The results of this study suggest that the MA intervention exhibited advantages in improving the muscle strength of preschool children when compared with the other physical exercise programs and the control group. However, further research is warranted to better understand the effect of various physical exercise programs on the flexibility of preschool children.
The motor fitness of preschoolers clearly improved in all intervention groups after the 16-week interventions, and the intervention groups had certain advantages over the CG group. Previous research, including systematic reviews and meta-analyses, has likewise indicated that designed physical activity programs have a positive effect on the motor fitness of preschool children [11,15,29], similar to the results of this study. The BG and MA groups displayed clear advantages in improving the motor fitness of preschool children when compared with the BM, RA, and CG groups. This may reflect the close relationship between children’s motor fitness and their level of motor skills [2]. A study on the effect of different exercise programs on the motor skills of preschool children indicated that multilateral exercise has certain advantages over specific programs of rhythmic gymnastics and soccer [18]. The physical exercise of the MA group may therefore better improve the motor performance of preschool children by better improving their motor skills. In this study, ball games had intervention effects on preschoolers’ motor fitness similar to those of multiple activities. In addition, research has indicated that the motor fitness of preschool children improved significantly alongside small improvements in cardiorespiratory fitness. The BG and MA groups exhibited advantages in improving cardiorespiratory fitness when compared with the BM, RA, and CG groups, which may help explain why the BG and MA programs better improved the motor fitness of preschool children.
In summary, the MA group had advantages over the BM, RA, and CG groups in terms of improving the physical fitness of preschool children. In addition, in this study, the BM and RA groups had no advantages over the CG group with regard to the improvement of cardiorespiratory fitness, musculoskeletal fitness, and motor fitness. These results are similar to those of another study, which found that a teacher-centered intervention granted preschool children no advantage over the control group in terms of motor fitness [14]. There is an evidence gap with regard to the effect of different physical exercise programs on the physical fitness of preschool children, and there are no comparable results against which to verify whether the results of this study are reasonable. However, relevant studies have indicated that early specialized sports training, or focusing on the development of just one sport, may lead to a series of growth and development problems, such as physical and physiological imbalance, unilateral muscle development, risk of injury, coordination development disorder, limitations on differentiated skill acquisition, and even negative effects on mental health, and can also reduce children’s enthusiasm for PA participation [17,35]. In addition, studies have indicated that diversified sports activity modules and structured multisport programs have significant advantages over free play or conventional sports activity in improving the physical fitness of preschool children [27,36]. According to the research of Stodden et al. [37] and Lubans et al. [38], PA, physical fitness, and motor skills all reinforce each other, and multilateral exercise has certain advantages over a single exercise mode in improving the motor skills of preschool children [17]. This evidence can help explain why multiple activity programs can better improve the physical fitness of preschool children.
There are several limitations that need to be addressed in this study. The first concerns sample representation: because of the scale and difficulty of the experiment, only 4–5-year-old preschoolers were included, so the results may not be applicable to all preschool children. Second, the baseline level of physical fitness was not balanced across the experimental groups, and improvements after an intervention tend to be greater when the baseline level is low. In addition, the physical environments of the baseline test and the post-intervention test were relatively different. Therefore, the significance of the pre-post comparisons of physical fitness is limited. However, we used a mixed-effects model to adjust for the baseline test results, gender, and other factors when estimating the intervention effects. In addition, all kindergartens participating in the experiment were in the same community, and the test environments were similar. Finally, the number of preschool children included in the analysis was not balanced across groups, but the minimum sample size required for the statistical analysis (30 children per group) was met [19].
## 5. Conclusions
Physical exercise programs designed for preschool physical education have positive effects on the physical fitness of preschool children. Compared with the exercise programs with a single project and action form, comprehensive exercise programs with multiple action forms can better improve the physical fitness of preschool children.
# Longitudinal Study on the Effect of Onboard Service on Seafarers’ Health Statuses
## Abstract
Seafaring is considered one of the most stressful professions. Stressors in seafaring lead to typical symptoms of stress, such as insomnia, loss of concentration, anxiety, lower tolerance of frustration, changes in eating habits, psychosomatic symptoms and diseases, and overall reduced productivity, with the possibility of burnout and chronic responsibility syndrome. It has been previously determined that seafarers belong to high-risk occupations in terms of developing metabolic syndrome, and according to their BMIs, almost $50\%$ of all seafarers belong to the overweight and obesity categories. This is the first longitudinal study conducted with the aim of using the BIA method to determine the anthropometrical changes that occur during several weeks of continuous onboard service. This study included an observed group consisting of 63 professional seafarers with 8 to 12 weeks of continuous onboard service and a control group of 36 respondents from unrelated occupations. It was determined that Croatian seafarers fit into the current world trends regarding overweight and obesity among the seafaring population, with the following percentages in the BMI categories: underweight, $0\%$; normal weight, $42.86\%$; overweight, $39.68\%$; and obesity, $17.46\%$. It was established that the anthropometric statuses of the seafarers significantly changed during several weeks of continuous onboard service. Seafarers who served on board for 11 weeks lost 0.41 kg of muscle mass, whereas their total fat mass increased by 1.93 kg. Changes in anthropometric parameters could indicate deterioration of seafarers’ health statuses.
## 1. Introduction
The atypicality and specificities of work, family, and social life are the main characteristics distinguishing seafarers’ lives from those of the rest of the working population [1]. The variety and speed of environmental changes, together with continuous exposure to noise and vibration, make it hard to maintain psychophysical homeostasis, not to mention other stressors arising from the specifics of the maritime profession, which are still insufficiently taken into account [2]. At least one environmental factor, such as excessively cold or warm ambient temperatures, odors, noise, poor bedding conditions, or ambient light during sleeping in cabins, disturbs $91.6\%$ of seafarers [3].
Stressors in seafaring lead to typical symptoms of stress, such as insomnia, loss of concentration, anxiety, lower tolerance of frustration, changes in eating habits, psychosomatic symptoms and diseases, and overall reduced productivity, with the possibility of burnout and chronic responsibility syndrome [4].
With circadian work rhythms such as the 6:6 and 4:8 shift systems, body recovery and sleep are interrupted and often insufficient. During night work on board in the 6:6 (midnight to 6:00 a.m.) and 4:8 (midnight to 4:00 a.m.) systems, seafarers experience increased sleepiness with shorter sleep episodes [4].
Metabolic health, cancer risk, cardiovascular health, and mental health are further compromised by shift work, especially night work. This is due to the problems caused by the shift-work lifestyle, which are mainly manifested in chronic sleep deprivation, sympathovagal and hormonal imbalance, inflammation, impaired glucose metabolism, and unregulated cell cycles. As a result, such long-term conditions lead to a number of health disorders such as obesity, metabolic syndrome, type II diabetes, gastrointestinal dysfunction, impaired immune function, cardiovascular disease, excessive sleepiness, mood and social disorders, and increased risk of cancer [5].
Compared with that of other transportation sectors, fatigue in the maritime sector has been much less researched. Fixed and rotating work schedules, along with cultural and commercial pressures, directly affect seafarers’ physical and mental health [4,6].
Given that, during their service on board, seafarers have limited influence on the quality and quantity of food [7], and that nutritional problems are even more pronounced in multiethnic crews with different eating habits, it is clear that the physical and psychological conditions of seafarers may imperceptibly deteriorate [4].
An individual’s ability to adequately cope with the demands of such a maritime occupation depends on that individual’s state of physical and mental health. An extremely demanding maritime occupation, which limits a person’s ability to maintain the usual way of life on land in terms of food choices and regular sleep, and often precludes exercise, can lead to a gradual loss of physical and mental fitness, which can ultimately result in human error, illness, and disabilities related to seafarers’ work [8].
A diet that does not include enough fresh fruits and vegetables can contribute to fatigue and has an overall negative impact on seafarers’ health [9,10]. In addition, the circadian rhythm of work affects digestion, which is most productive during the day and much less so at night, even when a person is awake and in a working rhythm [11]. Gastrointestinal disorders are very common in people who eat outside of traditional mealtimes and tend to worsen with consumption of tea, coffee, alcohol, nicotine, and some medications and supplements. Night workers are five times more likely to contract peptic ulcers than are day workers [12].
Exercise and good physical fitness have beneficial effects on the body and psyche, help in coping with stress, and can help reduce a person’s susceptibility to certain diseases and infections [13]. Some of the anthropometric methods for assessing a person’s health status are analysis of body composition and evaluation of body structure. The most-used methods are bioelectrical impedance (BIA) and the body mass index (BMI) [14]. The BMI is widely accepted and used as a standard test, and BIA is a valid and precise method for determining the body compositions of normal, healthy people [15] and athletes [16]. Due to fast and noninvasive measurement, BIA is widely used within the athlete population, but it has never been used in the population of professional seafarers.
Therefore, the aim of this study was to determine the body compositions of Croatian seafarers and investigate changes in anthropometric parameters during continuous onboard service.
## 2.1. Subject and Variable Sample
The subject sample included 99 adults from Croatia (Caucasian), divided into a control group and an experimental group. The control group included 36 subjects with a mean chronological age of 33.56 ± 8.49 years, a mean body height of 183.22 ± 5.58 cm, and a mean body mass of 93.15 ± 15.36 kg. The control group was a convenience sample selected to resemble the experimental group at the initial testing. Furthermore, the subjects in the control group were selected on the condition that they did not perform jobs similar to those of the seafarers or jobs involving long-term absence from home (e.g., drivers, pilots, soldiers, coaches, athletes, etc.). The experimental group included 63 subjects with a mean chronological age of 35.00 ± 8.08 years, a mean body height of 183.73 ± 5.94 cm, and a mean body mass of 89.43 ± 10.82 kg. This sample comprised professional seafarers who serve as officers aboard various types of merchant ships and for various companies. To make the experimental group as homogeneous as possible, subjects whose service aboard was shorter than 8 weeks or longer than 12 weeks were excluded from the sample. This period did not include the “idle” time between testing and departure, i.e., return from the ship. All subjects were measured on two occasions, the initial and final measurements, both performed during morning hours. The subjects in the experimental group (professional seafarers) were measured within seven days before departure and within seven days after returning home. The subjects in the control group underwent the final measurement at a randomly selected time 8 to 12 weeks after the initial testing.
Two anthropometric variables were measured, body height and body mass, which were then used to calculate the body mass index. All measurements were taken according to the International Society for the Advancement of Kinanthropometry (ISAK) protocol [17]. Furthermore, the subjects were measured with a Tanita BC-418 (Tanita Corp., Tokyo, Japan) device following the recommendations of Kyle et al. [18], and the following anthropometric measures were determined using the bioelectrical impedance method: the body fat percentage, fat mass, visceral fat, metabolic age, fat-free mass, total body water, extracellular water, intracellular water, muscle mass, the skeletal muscle index, bone mass, and the basal metabolic rate.
## 2.2. Description of Body Composition Measures
Body composition measures (the body fat percentage, fat mass, visceral fat, metabolic age, fat-free mass, total body water, extracellular water, intracellular water, muscle mass, the skeletal muscle index, bone mass, and the basal metabolic rate) were determined with the bioelectrical impedance method, using a Tanita BC-418 device (Tanita Corp., Tokyo, Japan). The subjects were measured barefoot and in dry underwear. The “body type” setting was set to “normal” for all subjects, whereas the “weight of clothes” was set to 0.0 kg.
Body Fat Percentage—the proportion of fat to the total body weight.
Fat Mass—the actual weight of the fat in the body.
Visceral fat—fat located deep in the core abdominal area, surrounding and protecting the vital organs.
Muscle Mass—the predicted weight of muscle in the body.
Total Body Water—the total amount of fluid in the body, expressed as a percentage of the total weight.
Extracellular Water—body fluid found outside of cells.
Intracellular Water—fluid found inside cells.
Bone Mass—the predicted weight of bone mineral in the body.
Basal Metabolic Rate—the daily minimum level of energy or calories the body requires when at rest (including sleeping) in order to function effectively.
Metabolic Age—a comparison of the basal metabolic rate (BMR) to the BMR average of a chronological age group. If the metabolic age is higher than the actual age, it is an indication that improving the metabolic rate is needed.
Skeletal Muscle Index—the ratio of the muscle in the arms and legs to height.
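To make two of these definitions concrete, the following Python sketch derives the body fat percentage and the skeletal muscle index from raw measurements. The field names and example values are hypothetical, and the SMI is computed per height squared, a common convention, whereas the device manual may define it differently.

```python
# Hypothetical field names and values; SMI is computed per height squared,
# a common convention (the definition above says "to height").
from dataclasses import dataclass

@dataclass
class BodyComposition:
    weight_kg: float        # total body mass
    fat_mass_kg: float      # absolute fat mass
    limb_muscle_kg: float   # muscle mass of arms + legs
    height_m: float

    @property
    def body_fat_percent(self) -> float:
        """Proportion of fat mass to total body weight."""
        return self.fat_mass_kg / self.weight_kg * 100.0

    @property
    def skeletal_muscle_index(self) -> float:
        """Arm + leg muscle mass relative to height (kg/m^2)."""
        return self.limb_muscle_kg / self.height_m ** 2

bc = BodyComposition(weight_kg=89.4, fat_mass_kg=18.2,
                     limb_muscle_kg=26.5, height_m=1.84)
print(f"{bc.body_fat_percent:.1f} % fat, SMI = {bc.skeletal_muscle_index:.1f}")
```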
## 2.3. Methods of Data Analysis
For all the measured variables and for each subject sample separately, the following descriptive parameters were calculated: arithmetic mean (AM); standard deviation (SD); median (M); minimum (MIN) and maximum (MAX) results; and the skewness (SKEW) and kurtosis (KURT) of the result distribution. Normality of distribution was tested with the Kolmogorov–Smirnov (KS) test. The differences between the control and experimental groups in chronological age, anthropometric characteristics, and body composition measures at the initial testing were determined using the independent samples t-test. The differences between the initial and final measurements of chronological age, anthropometric characteristics, and body composition measures within the control and experimental groups were determined using the t-test for dependent samples.
For each measured variable, the differences between the initial and final tests were calculated for the control and experimental groups, and their arithmetic means were determined. The between-group differences in these initial-to-final changes in chronological age, anthropometric characteristics, and body composition measures were determined using the independent samples t-test. The data were analyzed using Statistica Ver. 11.0 (StatSoft Inc., Tulsa, OK, USA).
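For readers who wish to reproduce this pipeline outside Statistica, the sketch below implements the same sequence of tests in SciPy on simulated data whose means and SDs mirror the reported sample characteristics; it is an illustration, not the study's analysis script.

```python
# Illustration with simulated data (means/SDs mirror the reported body
# masses); not the study's analysis script.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
exp_pre = rng.normal(89.4, 10.8, 63)           # seafarers, initial body mass (kg)
exp_post = exp_pre + rng.normal(1.5, 2.0, 63)  # final measurement
ctrl_pre = rng.normal(93.2, 15.4, 36)          # control group, initial
ctrl_post = ctrl_pre + rng.normal(0.0, 2.0, 36)

# Normality check (Kolmogorov-Smirnov against a fitted normal distribution).
print(stats.kstest(exp_pre, "norm", args=(exp_pre.mean(), exp_pre.std())))

# Initial between-group comparison: independent samples t-test.
print(stats.ttest_ind(exp_pre, ctrl_pre))

# Initial vs. final within a group: dependent (paired) samples t-test.
print(stats.ttest_rel(exp_pre, exp_post))

# Pre-post changes compared between groups: independent samples t-test.
print(stats.ttest_ind(exp_post - exp_pre, ctrl_post - ctrl_pre))
```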
## 3. Results
Table 1 presents the results of the Kolmogorov–Smirnov test for the anthropometric variables; no variable exceeded the test’s cutoff value, which was 0.23 for the observed sample. This indicates that there were no significant deviations of the variables from the normal distribution, and all variables were suitable for further parametric statistical analysis.
Table 2 presents the results of the Kolmogorov–Smirnov test for the anthropometric variables; no variable exceeded the test’s cutoff value, which was 0.17 for the observed sample. This indicates that there were no significant deviations of the variables from the normal distribution, and all variables were suitable for further parametric statistical analysis.
Table 3 shows that, in the initial measurement, the t-test revealed no significant differences between the control and experimental groups in the arithmetic means of the measured variables.
Table 4 shows that the t-test revealed significant differences between the initial and final measurements in the experimental group for the following variables: age, weight, the body mass index, the fat percentage, fat mass, visceral fat, metabolic age, fat-free mass, total body water, intracellular water, and muscle mass.
Table 5 shows that the t-test revealed significant differences between the control and experimental groups in the changes in the measured variables between the initial and final measurements for the following variables: weight, the body mass index, fat percentage, fat mass, visceral fat, and metabolic age.
## 4. Discussion
The BMI was the most frequently measured/analyzed anthropometric variable in previous research on samples of professional seafarers [19,20,21,22,23,24,25]. In this study, the following proportions of professional seafarers in the BMI categories were determined: underweight, $0\%$; normal weight, $42.86\%$; overweight, $39.68\%$; and obesity, $17.46\%$. The obtained results should be compared with those of other authors with great caution because the BMI depends, among other things, on the cultural and ethnic characteristics of the population [26]. In a sample of 1155 subjects, Nittari [24] found an average BMI of 25.7 kg/m2, and the proportions were very similar to those found in this study: underweight, $0.8\%$; normal weight, $47.20\%$; overweight, $40.80\%$; and obesity, $11.20\%$. Similar results were found in a study conducted by Gamo Sagaro in 2021 [25], in which the mean BMI was 25.55 kg/m2 and the following percentages were determined in the BMI categories: underweight, $0\%$; normal weight, $51.90\%$; overweight, $39.30\%$; and obesity, $8.50\%$. The comparison with the 2021 study is even more meaningful because the average age of its subjects ($$n = 603$$) was 37.37 years, very similar to that of the sample in this study. The higher proportion of obesity and higher mean BMI values in these seafarers compared to the Nittari study can be explained by the fact that $51\%$ of the subjects in that study were Filipinos and Indians, who generally have a lower tendency to be overweight and obese [27]. Results almost identical to those of this study were determined in Hoeyer’s 2005 study [19] on seafarers aged 25–44 years ($$n = 613$$): underweight, $2.8\%$; normal weight, $40.0\%$; overweight, $38.8\%$; and obesity, $18.4\%$.
A higher proportion of overweight and obese seafarers compared to the observed sample was determined in a study conducted by Hansen in 2011 [20], which, among other things, indicated an increase in the frequency of overweight among seafarers. We can conclude that Croatian seafarers fit the current world trends regarding overweight and obesity among the seafaring population, which is considered one of the main health problems of today. However, a comparison of the BMIs of Croatian seafarers with WHO data for the Croatian general population shows that Croatian seafarers have lower mean BMI values and thus a lower proportion of overweight and obesity. The control group also had lower mean BMI values than the general population, according to the WHO [27].
BIA is a very fast, simple, and reliable method for body composition analysis [28,29,30,31]. Although BIA measurement is widely used among top athletes [32,33], it had not been used in the population of professional seafarers until now. Moreover, it has not even been used in the population of drivers, who, along with seafarers, belong to the group of highest-risk workers [34]. The observed sample of seafarers and the control group had lower %BF values than did maritime university students [22], even though the subjects in this study were significantly older and the percentage of fat tissue tends to increase with age [35]. This can be explained, in part, by the fact that the studies did not use body-composition-analysis instruments from the same manufacturer. In a study on a sample of professional firefighters [36], the same analysis equipment was used as in this study, and the results indicated %BF values similar to those of seafarers of the same age, i.e., slightly higher %BF values in older firefighters, as expected. In addition to determining the anthropometric characteristics of seafarers, this study aimed to analyze changes in the body compositions of seafarers during service on board. To the authors’ knowledge, this is the first longitudinal study on the population of professional seafarers.
To ensure an unambiguous interpretation of the results, this study also included a control group of subjects, which did not differ significantly from the experimental group. During 10.97 weeks of onboard service, the total body mass of the professional seafarers increased by 1.50 kg. Although the change in total body mass compared to that of the control group was significant, it should not be a cause for concern in real life. However, analysis of body composition revealed fundamental problems that, at first glance, remained hidden in the relatively small change in total body mass. During their service on board, the seafarers, on average, lost 0.41 kg of their total muscle mass, whereas their total fat mass increased by 1.93 kg; the difference (1.93 − 0.41 = 1.52 kg) accounts almost exactly for the observed 1.50 kg increase in total body mass. Of course, this “negative” transformation was also reflected in other indicators of body composition: an increase of 1.81 percentage points in the percentage of body fat and an increase of 0.73 in the visceral fat rating were determined.
On average, a muscle mass loss of 0.41 kg and a total body fat increase of 1.93 kg were recorded among the subjects who served on board for 11 weeks. An increased proportion of fat mass in the body composition raises the risk of metabolic syndrome, which is characterized by visceral obesity associated with insulin resistance, arterial hypertension, dyslipidemia, diabetes, and glucose intolerance. Possible causes of these rapid anthropometric changes are physical inactivity on board and circadian rhythm disorders accompanied by sleep disorders. Lack of sleep and circadian sleep disorders are symptoms of many conditions: Jepsen concluded that lack of sleep is associated with obesity [37], and it is debated whether circadian sleep disorders are the causes or the consequences of some neurodegenerative diseases [38,39].
Body composition is a much better indicator of nutritional status than the body mass index because obesity is defined not as increased body mass but as an increased proportion of adipose tissue in body mass. Among the study subjects, an average increase of 1.81 percentage points in the percentage of body fat and an average increase of 0.73 in the visceral fat rating were determined.
## 5. Conclusions
In this paper, anthropometrical characteristics of professional seafarers, which can certainly be a point of reference for future research, were determined. Furthermore, this is one of the rare studies in which the problem of the influence of onboard service on professional seafarers’ health was approached through a longitudinal study. It was established that the anthropometric statuses of the seafarers significantly changed during several weeks of continuous onboard service. These changes in anthropometric parameters could indicate deterioration of seafarers’ health statuses. We can only speculate about the causes of those anthropometric changes in a relatively short interval. The main shortcomings of this study are reflected in the fact that no external factors were measured. Therefore, it is recommended for future studies to include tests and methods aimed at detecting possible negative factors such as diet, sleep, psychological stress, etc. It is also recommended to repeat this research on seafarers who perform other types of work on merchant fleets (engineers, auxiliary staff, etc.), seafarers of other maritime occupations (fishermen, skippers, etc.), and other professionals who must leave home for a long time to perform their jobs (pilots, drivers, seasonal workers, etc.).
# Normative Values and Psychometric Properties of EQ-5D-Y-3L in Chilean Youth Population among Different Weight Statuses
## Abstract
Background: This study aimed to provide population norms among children and adolescents in Chile using the EQ-5D-Y-3L questionnaire and to examine its feasibility and validity among body weight statuses. Methods: This was a cross-sectional study in which 2204 children and adolescents (aged 8–18 years) from Chile completed a set of questionnaires providing sociodemographic, anthropometric and health-related quality of life (HRQoL) data using the five EQ-5D-Y-3L dimensions and its visual analogue scale (EQ-VAS). Descriptive statistics of the five dimensions and the EQ-VAS were categorized into body weight status groups for the EQ-5D-Y-3L population norms. The ceiling effect, feasibility and discriminant/convergent validity of the EQ-5D-Y-3L were tested. Results: The dimensions of the EQ-5D-Y-3L questionnaire presented more ceiling effects than the EQ-VAS. The validity showed that the EQ-VAS could discriminate among body weight statuses. However, the EQ-5D-Y-3L index (EQ-Index) demonstrated a non-acceptable discriminant validity. Furthermore, both the EQ-Index and the EQ-VAS presented an acceptable concurrent validity among weight statuses. Conclusions: The normative values of the EQ-5D-Y-3L indicated its potential use as a reference for future studies. However, the validity of the EQ-5D-Y-3L for comparing the HRQoL among weight statuses could be insufficient.
## 1. Introduction
Health-related quality of life (HRQoL) has been defined as a multidimensional concept that goes beyond somatic indicators, including physical, psychological, social and functional aspects of an individual’s self-assessed health [1]. The increase in chronic illness among children and adolescents [2] has made HRQoL assessment a topic of significant interest to public health. This interest has been underscored by the US Food and Drug Administration and the pharmaceutical industry, which recognize the need for assessing HRQoL in pediatric and adolescent patients to determine the effects of pharmacological treatments and complete the biomedical perspective [3].
HRQoL is measured via self-report or proxy report from a standardized questionnaire that includes different dimensions. The questionnaire provides a generic health perception allowing comparisons between different populations and conditions and also an econometric result that could be used in cost–utility analysis for economic evaluation [4].
The main HRQoL questionnaires for children and adolescents, including the PedsQL [5], Kidscreen [6], and EQ-5D-Y-3L [7], have culturally adapted their versions for most countries.
The EQ-5D-Y-3L is a widely used questionnaire with five dimensions of health (“mobility,” “looking after myself,” “doing usual activities,” “having pain or discomfort,” and “feeling worried, sad or unhappy”) and three levels of response indicating the severity of health problems in the participant, providing 243 possible health states [8]. The EQ-5D-Y-3L has been translated and adapted to Spanish and presents acceptable validity and reliability [7]. This questionnaire is also used in Latin American countries [9], but to our knowledge, there are scant normative data for this region.
Within the broad spectrum of childhood diseases, obesity takes up a prominent position due to its prevalence and its effects on physical and psychological health [10,11]. One of the principal components of chronic illness in children and adolescents living in Latin American countries is overweight and obesity, which has grown continuously in the last decade [12]. In this respect, previous studies have shown an inverse relationship between body mass index (BMI) and HRQoL. For example, Perez-Sousa et al. [13] found that overweight and obese Spanish children showed a lower HRQoL than their normal-weight counterparts, and Garcia-Rubio et al. [14] showed, in a cross-sectional study carried out in Chile, that overweight and obese children and adolescents had a reduced HRQoL compared to healthy-weight children. However, several studies have presented a muddled relationship between excess body weight and HRQoL. For example, Petersen et al. [15] found a similar HRQoL in children with obesity and normal weight, and Liu et al. [16] found a lower HRQoL only for the social dimension in overweight/obese children compared with healthy-weight children after controlling for gender, age, school type, parental education and family income. In general terms, these studies emphasize that the lack of differences found may be due to cultural and/or socioeconomic characteristics. However, we hypothesize that the questionnaire cannot discern different health perceptions between weight statuses because its psychometric performance in these subgroups is not well characterized.
Population norms are essential to characterize the study population, interpret research results, and compare studies. Furthermore, this action allows comparison of results from the general population or people with specific health characteristics in order to develop primary physician care standards [17]. However, Chile lacks studies on normative values of HRQoL in children and adolescents from general and specific populations using the EQ-5D-Y-3L questionnaire. Thus, based on the current evidence and the importance of screening for HRQoL within children and adolescents, we aimed to provide normative population values for HRQoL and examine the feasibility and convergent/discriminant validity among Chilean children and adolescents with different weight status using the EQ-5D-Y-3L.
## 2.1. Study Design and Participants
A cross-sectional analysis was conducted using data collected from 2204 Chilean children and adolescents aged 8–18 years from the general population. We invited 3150 participants from primary and secondary schools in Chile, of whom 2204 finally agreed to participate in the interviews. We requested the participation of eight schools (four primary and four secondary), each providing access to four or five sections of different grades. Participants who met the following inclusion criteria formed our target group: aged 8–18 years; knowledge of the Spanish language; present on the day of the test; and informed consent given (by the subjects and their parents or legal guardians).
Before data collection, the parents were informed of the methodology and objectives of the study via an official letter written by the researchers that included an informed consent form. The study was approved by the University of Santiago Ethics Committee (code 938).
## 2.2. Procedure
The data were collected by two experienced research group members using direct administration in small groups (10–12 children per group). The survey duration varied from 5 min for children aged 8–12 years to 3 min for students aged 13–18 years. Each respondent was assigned a code for confidentiality and to facilitate data analysis. A phone number and email address were provided to respondents to address any concerns that might arise at any time.
## 2.3.1. Sociodemographic Information
A core set of questions on essential sociodemographic characteristics (age, gender and year of schooling), HRQoL and subjective health measures was included. For anthropometric data, weight and height were assessed with the participants standing barefoot in minimal clothing. The instruments used were a Seca 769 scale (Seca, Hamburg, Germany) and a portable Seca 220 stadiometer (accuracy of 0.1 cm; Seca, Hamburg, Germany) placed against a rigid wall. BMI was calculated as body weight divided by squared height (kg/m2). Individuals were classified into four categories according to their BMI (underweight, normal weight, overweight and obese), as indicated by Cole et al. [18].
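As an illustration of this classification step, a minimal sketch in Python follows. The helper names and the cut-off numbers are placeholders of our own; the study used the age- and sex-specific cut-offs published by Cole et al. [18], which vary by age band and are not reproduced here.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by squared height (kg/m2)."""
    return weight_kg / height_m ** 2

def weight_status(bmi_value: float, cutoffs: dict) -> str:
    """Classify a BMI value against one age/sex stratum's cut-offs.

    The `cutoffs` values below are illustrative placeholders, not the
    published Cole et al. cut-offs, which differ by age and sex.
    """
    if bmi_value < cutoffs["underweight"]:
        return "underweight"
    if bmi_value < cutoffs["overweight"]:
        return "normal weight"
    if bmi_value < cutoffs["obese"]:
        return "overweight"
    return "obese"

# Example: a participant weighing 52 kg and 1.55 m tall (~21.6 kg/m2)
b = bmi(52, 1.55)
print(weight_status(b, {"underweight": 15.0, "overweight": 21.0, "obese": 25.5}))
```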
## 2.3.2. Health-Related Quality of Life
The EuroQol group developed a tool with five dimensions (the EQ-5D) to quantify HRQoL. The dimensions are mobility, self-care, usual activities, pain or discomfort and anxiety or depression. The instrument also includes a visual analogue scale (EQ-VAS), which is anchored at 100 (best imaginable health) and 0 (worst imaginable health). More recently, the EuroQol group implemented a version for children and adolescents between the ages of 8 and 18 years, called the EQ-5D-Y-3L [7]. The five questions ask whether children have problems with walking, looking after themselves or doing their usual activities, have pain or discomfort, or feel worried, sad or unhappy, to which they can respond with “no problems,” “some problems” or “a lot of problems.” The EQ-5D-Y-3L describes a health state that can be converted into a single index (EQ-Index) by applying a formula that attributes different weights to each dimension’s levels. The anchor points of the questionnaire are 0 (death) and 1 (perfect health). We applied the value set developed for adult health status in Spain [19]. This procedure has already been applied in similar studies [20,21]. The reliability and validity of the Spanish version of the EQ-5D-Y-3L have been confirmed [7], and the EQ-VAS allows subjects to assess their health status from 0 (worst) to 100 (best).
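The level-to-index conversion follows the general pattern sketched below. The decrement weights are invented for illustration and are not the coefficients of the Spanish value set applied in the study [19].

```python
# Illustrative sketch of converting an EQ-5D-Y-3L response profile into an
# EQ-Index. The decrement weights are invented placeholders; the study used
# the Spanish adult value set [19], whose coefficients are not shown here.
DECREMENTS = {
    "mobility":           {1: 0.0, 2: 0.05, 3: 0.15},
    "self_care":          {1: 0.0, 2: 0.04, 3: 0.12},
    "usual_activities":   {1: 0.0, 2: 0.05, 3: 0.14},
    "pain_discomfort":    {1: 0.0, 2: 0.06, 3: 0.17},
    "anxiety_depression": {1: 0.0, 2: 0.06, 3: 0.16},
}

def eq_index(profile: dict) -> float:
    """Map a profile such as {'mobility': 1, ...} to an index anchored at
    1 (perfect health); each reported problem subtracts its weight."""
    return 1.0 - sum(DECREMENTS[d][level] for d, level in profile.items())

# Health state 11221: some problems with usual activities and pain/discomfort
print(eq_index({"mobility": 1, "self_care": 1, "usual_activities": 2,
                "pain_discomfort": 2, "anxiety_depression": 1}))  # 0.89
```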
## 2.3.3. Statistical Analysis
A descriptive analysis using the means ± standard deviation (SD) for continuous variables and frequency distribution for categorical variables was used to obtain the characteristics of the sample.
## Population Norms
The EQ-5D-Y-3L population norms were derived from the data given by the general population sample. Analysis of the EQ-5D-Y-3L population norms followed the standardized method recommended by the EuroQol group [22].
## Feasibility
We computed the proportion of children not answering some (i.e., partially incomplete questionnaire) or all of the dimensions (i.e., incomplete questionnaire) of the EQ-5D-Y-3L.
## Ceiling Effect
The proportion of children reporting “no problems” was calculated for each dimension of the descriptive system. We also computed the proportion of children reporting “no problems” in all five dimensions (health state 11111). We hypothesized that normal-weight children would report a higher ceiling effect than their counterparts.
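Both the feasibility and ceiling-effect measures reduce to simple proportions over the response matrix; a minimal sketch follows, in which the data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical responses: one row per child, one column per dimension,
# levels 1-3 (1 = "no problems"); NaN marks an unanswered dimension.
df = pd.DataFrame({
    "mobility":           [1, 1, 2, 1],
    "self_care":          [1, 1, 1, None],
    "usual_activities":   [1, 2, 1, 1],
    "pain_discomfort":    [1, 1, 3, 1],
    "anxiety_depression": [2, 1, 1, 1],
})

# Feasibility: shares of partially incomplete and fully incomplete forms
partial = (df.isna().any(axis=1) & ~df.isna().all(axis=1)).mean()
fully_incomplete = df.isna().all(axis=1).mean()

# Ceiling effect: "no problems" per dimension and the 11111 state overall
per_dimension = (df == 1).mean()
state_11111 = (df == 1).all(axis=1).mean()
print(partial, fully_incomplete, per_dimension.round(2).to_dict(), state_11111)
```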
## Discriminant and Convergent Validity
The discriminant validity of the EQ-5D-Y-3L was examined by comparing the HRQoL profiles of the different weight status groups (underweight, normal weight, overweight and obesity). The level of problems reported in each EQ-5D-Y-3L dimension per group was compared using Fisher’s exact test rather than the chi-square test because some cells were sparsely populated. Post hoc analysis using the Kruskal–Wallis H test indicated which groups were significantly different from each other. Following studies in overweight and obese children [23,24], we assumed that complaints of health problems would be more common among underweight, overweight and obese children and that these individuals would therefore have lower scores on the EQ-5D-Y-3L dimensions and EQ-VAS than normal-weight children. The convergent validity of the EQ-5D-Y-3L was examined by correlating the EQ-Index with the EQ-VAS through Spearman’s rho correlation. The correlation coefficient (ρs) was interpreted as follows: small, 0.10–0.29; moderate, 0.30–0.49; strong, ≥0.50 [25].
Convergent validity is the ability of the scores to correlate with other measures that assess a similar construct. In contrast, discriminant validity examines the relationships of scores obtained from similar but different constructs [25].
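For the group comparison and correlation steps described above, a minimal sketch with invented data is shown below, using SciPy’s `kruskal` and `spearmanr`; the R×C Fisher’s exact test on the full contingency table is not reproduced here.

```python
from scipy import stats

# Invented EQ-VAS scores for two weight-status groups
normal_weight = [90, 85, 95, 80, 88, 92]
obese         = [70, 75, 82, 68, 77, 73]

h, p = stats.kruskal(normal_weight, obese)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")

# Convergent validity: correlate the EQ-Index with the EQ-VAS
eq_index_vals = [1.00, 0.89, 0.95, 0.78, 0.91, 0.85]
eq_vas_vals   = [95, 80, 90, 70, 85, 75]
rho, p = stats.spearmanr(eq_index_vals, eq_vas_vals)
label = "small" if rho < 0.30 else "moderate" if rho < 0.50 else "strong"
print(f"rho = {rho:.2f} ({label}), p = {p:.3f}")  # thresholds from [25]
```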
## 3. Results
Table 1 shows the characteristics of the general population sample. Overall, a total of 2204 children and adolescents responded to the set of questions in the EQ-5D-Y-3L. The sample included more females ($$n = 1313$$; $59.6\%$) than males ($$n = 891$$; $40.4\%$). The proportions among weight status groups were dissimilar, with the majority of respondents in the normal-weight group ($43.5\%$). The mean ± SD of the EQ-Index by gender, age group and weight status group are also presented.
The frequency of reported problems by weight status group is shown in Table 2. Fisher’s exact and Kruskal–Wallis analyses showed nonsignificant differences ($p > 0.05$) in the distribution of problems for each dimension of the EQ-5D-Y-3L; that is, there were no differences in the problems reported for HRQoL among underweight, normal-weight, overweight and obese children across the EQ-5D-Y-3L dimensions. Thus, the discriminant validity of the descriptive system appeared to be low, unable to discern problems among children and adolescents with different weight status. In contrast, there were statistically significant differences in HRQoL reported on the EQ-VAS among all weight status groups. The ceiling effect (no problems reported) was relatively higher in the physical dimensions (mobility; looking after myself; doing usual activities), whereas the psychological dimensions (having pain or discomfort; feeling worried, sad or unhappy) showed a lower ceiling effect in all groups.
Finally, convergent validity was examined (Table 3). Spearman’s rho test showed a significant correlation ($p \leq 0.001$) between all dimensions in all groups and for the EQ-VAS, with the exception of the “looking after myself” and “feeling worried, sad or unhappy” dimensions in the overweight group. The magnitude of the correlation was low in all dimensions and groups, except for “mobility,” “doing usual activities” and “feeling worried, sad or unhappy” in the underweight group.
## 4. Discussion
This study has provided population norms for the EQ-5D-Y-3L questionnaire using a representative sample of Chilean children and adolescents ($$n = 2204$$) and has examined the instrument’s psychometric properties, in terms of feasibility and discriminant/convergent validity, to determine its ability to discern health states among weight status groups.
A strength of this study’s EQ-5D-Y-3L population norms is that they were obtained from a neutral-context sample, with responses pooled across different weight statuses. To date, this is the first study to present normative data for Chilean children and adolescents using the EQ-5D-Y-3L questionnaire; other studies in the general population have been conducted in Europe [7] or North America [26].
The main findings of this study were that the Spanish version of the EQ-5D-Y-3L is a feasible instrument to assess HRQoL in the Chilean population because there were no missing values. The results are consistent with previous research, including a multinational study performed to analyze the validity and reliability of the EQ-5D-Y-3L [7]. Our study identified a higher ceiling effect on the physical dimensions (mobility; looking after myself; doing usual activities) and a lower effect on the psychological dimensions (having pain or discomfort; feeling worried, sad or unhappy) in all groups. This ceiling effect is similar to previous studies in the general population [7]. Furthermore, a previous study that used the EQ-5D-Y-3L with overweight and obese children reported few problems in the majority of dimensions, except for anxiety/depression [27].
Another finding of this study was the scarce discriminant validity of the descriptive system of the EQ-5D-Y-3L between health states across weight status. There were no significant differences in the distribution of problems in each dimension among underweight, normal-weight, overweight and obese children. These results are similar to previous studies [7,27,28]. In contrast, we found several reviews that analyzed how overweight and obesity affect children’s HRQoL [29,30]. However, the questionnaires that assessed HRQoL in those reviews were Kidscreen, PedsQL and KINDL-R, which are based on 5–7 response levels, whereas the EQ-5D-Y-3L has only three. Moreover, we found other studies in which the score from the PedsQL or Kidscreen discriminated health status among weight status groups; the scores of these questionnaires are based on a 0–100 scale, whereas the EQ-5D-Y-3L dimensions are scored from 1 to 3. Nevertheless, our study found that the EQ-VAS did discriminate among health states across weight groups. This finding suggests that a scale such as the EQ-VAS, based on a 0–100 range, may be more accurate in identifying health states than a descriptive system based on three response levels. This low discriminant validity of the descriptive system may be due, first, to the high ceiling effect of this instrument. Second, there is the non-dimensionality of the EQ-VAS: the descriptive system and the EQ-VAS start from different scales, the former describing five dimensions of the state of health and the latter expressing health as a percentage of the best imaginable state. The EQ-VAS can therefore cover as many dimensions of health as the respondent interprets, all reduced to a single value. Third, several studies indicate that the response on the EQ-VAS is influenced not only by health status but also by personal characteristics such as age, gender, education and race [31,32,33]. Expanding the severity levels of the EQ-5D-Y-3L can reduce the instrument’s ceiling effects and enhance sensitivity, especially in milder health conditions [34]. Thus, discriminant validity will probably be better using the new EQ-5D-Y-5L instrument [35,36].
The convergent validity of the EQ-5D-Y-3L dimensions for each weight status group showed a significant association with the EQ-VAS, but the magnitude of correlation in general was low. Thus, these results should be considered with caution.
Our study has certain limitations. We did not collect information concerning comorbidities or include other populations, such as hospitalized children or those with chronic diseases; these factors need to be considered when applying the normative data to other groups or individuals. Furthermore, this study was observational; thus, we might have missed some confounders. Additionally, the method was self-administration, whereas other studies apply proxy administration. Another limitation is the low prevalence of severe health problems captured by the instruments used. Although children with overweight or obesity have a lower HRQoL than children with a healthy weight [24], their baseline HRQoL as measured by currently available instruments starts from a high level, which limits the capture of possible improvements. This is determined by the ceiling effect of the questionnaire, which can be observed in the small proportion of individuals with severe or large HRQoL problems. Likewise, this ceiling effect has been reported for the EQ-5D-Y-3L in studies of individuals without severe health problems [1]. In fact, the EuroQol group is developing a version of the questionnaire with five response levels (EQ-5D-Y-5L) to obtain greater scaling in certain populations. Therefore, our results should be considered with caution.
The study’s strengths include its large sample of 2204 Chilean children and adolescents. Moreover, the results provide a better understanding of the use of the EQ-5D-Y-3L questionnaire in children with obesity and help in deciding whether to use this questionnaire over another and how to interpret its results.
## 5. Conclusions
The Chilean population norms for the EQ-5D-Y-3L reported here can be used as reference values when comparing different weight status groups. Furthermore, the study confirmed its feasibility even though the convergent and discriminant validity of the EQ-5D-Y-3L was insufficient. Consequently, we recommend that the results of future studies using the EQ-5D-Y-3L on children with heterogeneous weight status should be interpreted with caution. |
# A Digital Tool for Measuring Healing of Chronic Wounds Treated with an Antioxidant Dressing: A Case Series
## Abstract
Abstract: Wound monitoring is an essential aspect of the evaluation of wound healing. It can be carried out with the multidimensional HELCOS tool, which performs a quantitative analysis and graphic representation of wound healing evolution via imaging, comparing the area and tissues present in the wound bed. This instrument is used for chronic wounds in which the healing process is altered. This article describes the potential use of the tool to improve the monitoring and follow-up of wounds and presents a case series of chronic wounds of diverse etiology treated with an antioxidant dressing. Methods: A secondary analysis of data from a case series of wounds treated with an antioxidant dressing and monitored with the HELCOS tool. Results: The HELCOS tool is useful for measuring changes in the wound area and identifying wound bed tissues. In the six cases described in this article, the tool was able to monitor the healing of the wounds treated with the antioxidant dressing. Conclusions: Monitoring wound healing with the multidimensional HELCOS tool offers new possibilities to facilitate treatment decisions by healthcare professionals.
## 1. Introduction
A chronic wound, also called a hard-to-heal wound, has been defined as any wound that has not healed by 40–$50\%$ after four weeks of appropriate treatment [1]. Several factors can delay the physiological process of healing, including oxygenation, infection, diabetes, medications, stress, nutrition, hormones, and age [2].
When assessing the wound healing process, clinicians often face the problem of reliably measuring wound size. Wound measurement is important for monitoring the healing process of chronic wounds and to evaluate the effect of treatments. This is a practical problem as most of the measures used are subjective and based on the clinical experience of the professional.
In the last few decades, technological advances have led to the development of several accurate methods and multidimensional tools for wound monitoring: manual or digital planimetry, the simple ruler method, mathematical models, digital imaging, or, more recently, three-dimensional (3D) imaging [3]. As a result, wound monitoring is more objective and allows the identification of different parameters and variables through specific analysis.
One of the multidimensional assessment tools recently developed is the HELCOS software, a web-based integrated wound management system that allows the measurement of different wound parameters through digital analysis of images of the wounds [4]. HELCOS was designed and developed between 2015 and 2017 through a project funded by the Spanish Pressure Ulcer and Chronic Wound Advisory Group. This tool has been available free of charge since 2017 for clinicians working in clinical settings; only a short registration is required. There are no special hardware requirements, only a computer or device connected to the Internet, so it can be used directly in any environment (hospital, wound clinic, primary care). All personal data security standards are guaranteed; each professional can only access his/her own cases. To perform a wound analysis, the clinician obtains an image of the wound with any type of camera or device. Good lighting conditions are highly recommended, taking the picture at 20 to 30 cm, perpendicular to the wound plane, and placing a size reference of known diameter close to the wound (such as a blue circle 2 cm in diameter). Photos can be uploaded directly from a smartphone or using a computer.
HELCOS allows clinicians to measure the wound area and the proportion of the wound bed covered with granulation or necrotic tissue. We have tested this tool in a series of wound cases treated with an antioxidant dressing.
It is known that wound healing is impaired when the wound remains in the inflammatory stage for too long [5]. Oxidative stress is among the factors that can delay the healing process [2]. Reactive oxygen species (ROS) are small oxygen-derived molecules that play a crucial role in the normal wound healing response [6]. Therefore, a suitable balance in ROS levels is essential: a wound with a low level of ROS protects tissues against infection and stimulates effective wound healing by promoting cell survival [7,8], whereas excess ROS damages cells, promotes a pro-inflammatory status and produces oxidative stress [9].
Therefore, the use of antioxidant compounds for wound treatment is increasing and has excellent potential for clinical use. Antioxidant dressings that regulate this balance are a target for new therapies [10,11]. Among these new advanced products is the antioxidant dressing Reoxcare® [12], developed by Histocell (Bizkaia, Spain). This product combines an absorbent matrix obtained from locust bean gum galactomannan, of plant origin, with an antioxidant hydration solution containing curcumin and N-acetylcysteine (NAC) [13].
Curcumin is a natural phenol extracted from the *Curcuma longa* rhizome. It has anti-inflammatory, antibacterial, and antioxidant properties, which improve wound healing [14]. NAC is an antioxidant molecule that plays an important role in regulating redox status [15]. The three components act synergistically, giving the product a potent antioxidant activity. Due to the innovative design, this antioxidant dressing combines the advantages of moist healing in exudate management and free radical neutralization, achieving wound reactivation.
This antioxidant dressing has been tested in different studies. In vitro studies and animal wound healing models have shown that the product modulates the inflammatory phase of wound healing, controlling excessive cell activation and allowing a more orderly transition between the inflammatory, proliferative, and remodeling phases [13]. A multicenter prospective case series revealed that the dressing can be applied to wounds independently of their level of recurrence or severity, effectively eliminating biofilm and facilitating the progression of the wound out of the inflammatory phase [16,17]. These findings suggest that the dressing could be a new advanced alternative for managing hard-to-heal wounds; in short, its value in wound management has been reported, with positive results.
The purpose of this article is to describe the potential use of a web-based wound measurement system (HELCOS) for monitoring the progress of wound healing in a case series of wounds.
## 2.1. Study Design
This study is a secondary analysis of a case series from the intervention group of the main study, using a descriptive design of healing monitoring with the HELCOS tool.
The main study is a prospective intervention study with two arms, intervention (antioxidant dressing) and comparison (usual care with moist dressing) [18]. Advanced practice wound nurses recruited patients with chronic wounds in three primary health care centers in the Andalusian Health Service in Spain between September 2019 and October 2021.
The main study included 54 patients (28 intervention group and 26 comparison group). Patients were eligible if they were aged 18 years or older with the following: leg ulcer (venous, ischemic, traumatic, or diabetic foot ulcer), dehisced surgical wound healing by second intention, or pressure ulcers. Wound area was between 1 and 250 cm2. Exclusion criteria were systemic inflammatory disease or oncological disease, wounds with clinical signs of infection, terminal situation (life expectancy less than 6 months), pregnancy or wounds treated with negative pressure therapy.
A cut-off of 8 weeks (or healing if this occurred before 8 weeks) was established. A clinical nurse assessed patients at baseline and at weeks 2, 4, 6, and 8 to determine their evolution. Data collected from each patient included demographic characteristics, the patient’s clinical background (concomitant medical diagnoses, clinical antecedents, nutritional status, smoking habit), a description of the wound (etiology, size, location, specific characteristics), healing measured by the RESVECH 2.0 score, and variation in wound area measured by the HELCOS tool.
## 2.1.1. Wound Management
Patients were managed according to a good standard of care. A general protocol for wound management was established: cleaning the wound with sterile physiological saline solution, debridement to remove nonviable tissue from the wound bed, application of the antioxidant dressing as a primary dressing, and covering with a secondary dressing. The dressing was kept in place for 2 to 3 days, according to the manufacturer’s recommendations and depending on the level of wound exudate.
## 2.1.2. Statistical Analysis
Descriptive statistics were used (mean and standard deviation for quantitative variables; frequency and percentages for nominal variables).
## 2.1.3. Ethical Aspects
The study was approved by the Ethics Committee of Jaen (Andalusian Health System) with reference number 0645-N-19. The study was conducted in accordance with the ethical principles of the Declaration of Helsinki. The patients provided written informed consent, which ensured data confidentiality.
## 3.1. Description of HELCOS Wound Healing Software
HELCOS is an integrated wound management system that calculates wound area and the relative percentage of tissue types in the wound bed using an image of the lesion. This image is loaded into the system and assigned to a patient and a case. For each patient, different images of the lesion can be attached over time to evaluate its evolution using different methods. This version is free and accessible in Spanish [4].
First, wound area is checked by measuring length and width directly with a graduated ruler (Kundin method) [19]. Then, it is estimated using digital analysis of wound photography.
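For the ruler-based step, the Kundin approach approximates the wound as an ellipse, with area commonly computed as length × width × π/4 (≈0.785). A minimal sketch under that assumption:

```python
import math

def kundin_area(length_cm: float, width_cm: float) -> float:
    """Approximate wound area (cm2) as an ellipse from its two axes:
    L * W * pi/4 (~0.785), the factor commonly used with the Kundin method."""
    return length_cm * width_cm * math.pi / 4

print(round(kundin_area(3.2, 2.3), 2))  # ~5.78 cm2 for a 3.2 x 2.3 cm wound
```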
Second, the relative percentage of tissue types in the wound bed (granulation, slough and necrotic tissue) is estimated. This software identifies tissue types by using different colors in the wound bed: red for granulation tissue, yellow for slough, and black for necrotic tissue. It also creates a graph showing the evolution of the percentage of tissue present in the wound bed over the follow-up period.
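How HELCOS segments tissue colors internally is not described here, but the idea can be illustrated with a crude per-pixel rule; the RGB thresholds below are invented for illustration only.

```python
import numpy as np

def classify_tissue(rgb_image: np.ndarray) -> dict:
    """Crude per-pixel tissue proportions from an RGB wound-bed image
    (uint8, already cropped to the wound). Thresholds are illustrative,
    not the rules used by HELCOS."""
    r = rgb_image[..., 0].astype(int)
    g = rgb_image[..., 1].astype(int)
    b = rgb_image[..., 2].astype(int)
    necrotic    = (r + g + b) < 150           # very dark pixels -> black
    granulation = ~necrotic & (r > g + 40)    # dominantly red pixels
    slough      = ~necrotic & ~granulation    # yellowish remainder
    n = rgb_image.shape[0] * rgb_image.shape[1]
    return {"granulation": granulation.sum() / n,
            "slough": slough.sum() / n,
            "necrotic": necrotic.sum() / n}

# Tiny 1x3 example image: one red, one yellow, one black pixel (each ~0.33)
img = np.array([[[200, 40, 40], [210, 190, 80], [20, 20, 20]]], dtype=np.uint8)
print(classify_tissue(img))
```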
In addition, the RESVECH 2.0 scale is integrated in this software for evaluation of the status of the wound [20]. It assesses six aspects (wound size, depth/affected tissues, wound edges, type of tissue in the wound bed, exudate, and infection/inflammation). The score of this scale ranges from 0 points (wound healed) to 35 points (the worst possible status of the wound). A lower score means an improvement in the healing process. This scale is an excellent tool for comparing the data grouped according to the type of wound, recurrence, or severity.
## 3.2. Description of Wounds
In reference to the etiology of the wounds, $28.6\%$ were venous, $7.1\%$ ischemic, $7.1\%$ diabetic, $25\%$ traumatic, $10.7\%$ surgical wounds, and $21.4\%$ pressure injuries. The wound locations were $42.9\%$ leg, $39.3\%$ foot, $10.7\%$ gluteus/coccyx, $3.6\%$ abdomen, and $3.6\%$ upper limb. The general wound characteristics are presented below (Table 1). Twelve wounds treated with the antioxidant dressing had healed at 8 weeks ($42.86\%$) and 16 showed an increase of $50\%$ or more in granulation tissue ($57.14\%$).
## 3.3. Healing Monitoring
We present several significant cases of wounds treated with the antioxidant dressing over eight weeks, which were monitored with the HELCOS software and achieved complete wound healing, significantly reduced wound area, or showed an important change in the tissues present in the wound bed.
The data and graphs presented in each of the cases refer to the analysis of the percentage of tissues present in the wound bed (granulation tissue and devitalized tissue, sloughed or necrotic) and the area of the lesion, as analyzed with the HELCOS system, demonstrating wound follow-up.
Case 1. Traumatic leg wound.
A 59-year-old male presented with a traumatic wound on the lower limb, which was not healing (Figure 1). The initial area of the wound was 5.86 cm2, with a depth affecting muscle, defined borders, tissue compatible with biofilm, and desquamation on the perilesional skin. At the week 6 assessment, we observed complete wound healing (Table 2).
The tissues present in the wound bed showed a favorable evolution toward healing throughout the 6 weeks of treatment, decreasing the percentage of sloughed tissue present in the wound bed and increasing granulation tissue (Figure 2).
Case 2. Incised leg wound.
A 71-year-old male presented with a traumatic injury to the internal tibial area. This wound had damaged edges and abundant exudate (Figure 3). The initial area was 4.73 cm2, with $89.98\%$ devitalized tissue (necrotic/sloughed), and only $10.02\%$ was granulation tissue. Over 8 weeks, the wound area was reduced and the wound bed was cleaned, until reaching complete healing (Table 3) (Figure 4).
Case 3. Wound with venous etiology.
This was a 72-year-old woman with a venous wound in the anterior tibial area. In the initial assessment, the wound area was 12.31 cm2, the edges were damaged, and there was saturation of exudate (Figure 5). The percentage of tissues in the bed was $60.58\%$ granulation tissue and $39.43\%$ sloughed tissue. At 8 weeks, the antioxidant treatment achieved wound closure, contributed to the removal of sloughed tissue, and induced granulation tissue formation (Figure 6). It should be noted that this treatment also significantly reduced pain: the patient reported 10/10 on the Visual Analog Scale (VAS) at the initial assessment, 4/10 at week 2, 1/10 at weeks 4 and 6, and no pain by week 8.
Case 4. Traumatic cavity wound.
This was a 67-year-old male with a cavity wound of traumatic etiology located in the lower extremity. This clinical case stands out for its rapid evolution. The initial area was 5.42 cm2, notable for the depth of the cavitation, but in just four weeks he achieved complete healing and a favorable evolution of the tissues (Figure 7 and Figure 8). It should also be emphasized that he initially reported 10/10 on the VAS pain scale, which decreased to 5/10 at week 2 and disappeared completely at week 4.
Case 5. Diabetic foot ulcer.
This was a 57-year-old man with a diabetic foot ulcer that had an initial area of 1.54 cm2, and closed at week 8 (Figure 9). The antioxidant treatment was able to clean the wound bed, completely eliminating the sloughed tissue and facilitating the production of granulation tissue. At baseline, the wound had $76.19\%$ granulation tissue and $23.81\%$ sloughed tissue; from week 2 to week 8 only granulation tissue was observed in the wound ($100\%$) (Figure 10).
Case 6. Dehiscence surgical wound.
A 75-year-old male presented with a surgical wound in the lower limb that was healing by second intention. The wound had muscle involvement, thickened borders, and exudate leakage. This injury stands out for two aspects. First, its initial surface: it was a large wound (27.41 cm2), which was reduced in size by $50\%$ (13.88 cm2) at week 8 (Figure 11). Second, the favorable evolution in the percentage of tissues present in the wound bed. Table 4 shows that from week 4, coinciding with the resolution of the inflammatory phase (the stage at which the antioxidant dressing differs most in effect from other therapeutic strategies), the tissue proportions in the wound bed inverted, with granulation tissue predominating and devitalized tissue decreasing (Figure 12). This wound reached complete healing at week 13, outside the follow-up period established in the study.
## 4. Discussion
Wound monitoring is an essential action: it provides baseline measurements and guides the assessment of wound healing [21]. However, monitoring methods need to be accurate, reliable, and feasible in order to assess the healing process.
According to the available scientific evidence, the use of digital planimetry or digital images is highly recommended. These methods provide high precision in measurements of the wound area and of the tissues present in the lesion bed [3].
Based on the results of our study, the HELCOS software [22] is a complete multidimensional tool performing quantitative comparison both of the wound area and of the different types of tissues present in the wound bed throughout the follow-up period. Moreover, this information is provided through descriptive data and graphical representations. The graphs help to interpret the numerical data obtained and visually improve the interpretation of the evolution analysis performed. In addition, HELCOS [22] includes wound assessment with the validated RESVECH 2.0 scale [20]. Digital or web-based tools for wound measurement and monitoring can be a useful resource in clinical studies.
In addition, some of the data obtained in these cases align with two previously published observational studies of this antioxidant product. One was a multicenter case series by Castro et al. in 2017 [16] including 31 patients with acute and chronic wounds, with a follow-up period similar to ours. At the end of that follow-up period, $29\%$ of the wounds healed completely, compared with $42.85\%$ in our study. Regarding the variation in the RESVECH scale, Castro et al. describe a decrease in the average score of 10.16 points; in our study the decrease was 7.89 points [16].
The other observational study mentioned was developed by Jiménez-García et al. [ 17], in which 31 patients with chronic wounds were included with a follow-up period of 12 weeks. The results described the evolution of wound healing evaluated by RESVECH 2.0, with a $67.8\%$ reduction at week 12 after using the antioxidant dressing. Likewise, the percentage of wound healing increased significantly over time, and was $71\%$ at week 12. During the follow-up time, $50\%$ of the wounds healed completely.
One of the strengths of this study is the use of the HELCOS web-based tool, which can help clinicians differentiate between different types of tissue in the wound bed and monitor healing. This article is one of the first reports of the performance of this tool in a real context.
However, the use of this tool is not without limitations. Digital images can be affected by lighting, location, and variability in image capture, leading to underestimation in the wound analysis [23], so it is recommended to standardize the lighting conditions for the picture.
## 5. Conclusions
The results obtained indicate that wound monitoring helps improve healing, facilitating clinical decision-making in healthcare. For this reason, measurement and monitoring methods must be precise, reliable, and feasible for correct application in daily clinical practice. This is reflected in the increasingly widespread use of digital applications for measuring and evaluating wounds. The HELCOS web-based system is a user-friendly and useful resource available to clinicians for wound analysis and wound healing monitoring. The antioxidant dressing used in these cases is an alternative for wound management that merits further research.
# Neuropsychological Performance and Cardiac Autonomic Function in Blue- and White-Collar Workers: A Psychometric and Heart Rate Variability Evaluation
## Abstract
The 21st century has brought a growing focus on performance and health within the workforce, with the aim of improving the health and performance of blue- and white-collar workers. The present research investigated heart rate variability (HRV) and psychological performance in blue- and white-collar workers to determine whether differences were evident. A total of 101 workers ($$n = 48$$ white-collar, $$n = 53$$ blue-collar, aged 19–61 years) underwent a three-lead electrocardiogram to obtain HRV data during baseline (10 min) and active (working memory and attention) phases. The Cambridge Neuropsychological Test Automated Battery, specifically the spatial working memory, attention switching, rapid visual processing and spatial span tasks, was used. Differences in neurocognitive performance measures indicated that white-collar workers were better able to detect sequences and made fewer errors than blue-collar workers. The heart rate variability differences showed that white-collar workers exhibited lower levels of cardiac vagal control during these neuropsychological tasks. These initial findings provide novel insights into the relationship between occupation and psychophysiological processes and further highlight the interactions between cardiac autonomic variables and neurocognitive performance in blue- and white-collar workers.
## 1. Introduction
In the 21st century, productivity is a crucial element in the strength and sustainability of a company’s gross business performance [1]. Both white-collar and blue-collar professions often require executive function to perform the tasks required for their work. However, compared to white-collar workers, blue-collar employees have been shown to have a higher prevalence of a large range of health complications, particularly cardiovascular disease (CVD) [2]. The workplace can often play a major role in the onset of cardiovascular disease and the current European guidelines on the prevention of CVD recommend an assessment of long-term stress, which includes occupational psychological stressors [3].
Executive cognitive function refers to a family of mental processes that are recruited for concentration and attention [4]. These executive functions have also been implicated in other aspects of health, such as obesity [5], occupational prosperity [6], and public safety [7]. Increasing evidence suggests an association between CVD and reduced psychological performance; however, few recent studies have examined the mechanisms relating working memory (WM) to CVD. Additionally, many previous studies linking memory and working memory deficits to cardiac failure have focused mostly on patients with severe CVD [8].
Heart rate variability (HRV) has been extensively used to reflect the sympathetic and parasympathetic activity of the autonomic nervous system [9]. Furthermore, previous research has linked HRV to CVD [10,11], as well as to various psychological processes [12,13]. Hansen et al. [12] established a relationship between HRV and performance on tasks that taxed executive function in normal subjects ($$n = 53$$ male, average age = 23 years) and found that the qualitative differences between task demands could be predicted by the subject’s cardiac vagal tone. Other researchers have investigated this connection, but vagal tone relationships remain largely unexplored [14]. Furthermore, using cardiac vagal tone as an independent variable to predict cognitive performance, Johnsen et al. [15] investigated attentional bias in 20 patients with dental anxiety (14 male, 6 female; mean age = 36 years) using a modified Stroop test [16]. Results showed that poor attentional performance was characterized by reduced HRV compared to patients with higher HRV [15].
This indication of decreased HRV with increased working memory load and higher HRV in better performers supports the notion that, during working memory function, HRV may qualitatively predict cognitive differences among individuals [17]. This also implies that executive performance and autonomic functions, such as HRV, may be adaptively regulated by an interrelated neural network. Therefore, HRV may provide an index of an individual’s ability to function effectively in a dynamic environment [17].
Limited research has linked working memory and attentional deficits to cardiac deficits [18], with most studies focused on end stage patients [19]. Therefore, more research needs to be centred around healthy individuals, which may implicate HRV as a pre-emptive biomarker for working memory and attentional performance.
This study aims to investigate neuropsychological processes (working memory and attention) in two major working populations, white-collar ($$n = 48$$) and blue-collar ($$n = 53$$) workers, further identifying the fundamental associations between working memory, attention, and HRV. Heart rate variability and executive function are evaluated in a sample of healthy blue and white-collar workers to better understand the cardiac autonomic vagal influence during neuropsychological performance and risk factors that may contribute to cardiovascular complications. It was hypothesized that [1] attentional states will increase cardiac vagal input, HF and RMSSD HRV in white-collar workers while indicating a decrease in blue-collar workers, and [2] spatial neuropsychological stress will exhibit a decrease in cardiac vagal input, HF and RMSSD HRV in white-collar workers and an increase in blue-collar workers.
## 2.1. Participant Recruitment
Healthy participants between the ages of 18–68 years ($$n = 101$$) were recruited from the community. Participants were required to abstain from caffeine and nicotine for 4 h and alcohol for 12 h prior to the commencement of testing. These factors are known to influence physiological measures and their restrictions enhance the reliability of the data. Additionally, participants with pre-study blood pressure (BP) measures greater than 160 mmHg (systolic)/or 100 mmHg (diastolic) were excluded [20]. Testing was conducted between 8:30 am and 12:00 pm to minimize the effect of circadian rhythm fluctuation [21] on the data obtained. No volunteer was excluded from the current study and written informed consent was obtained prior to commencement of the study protocol. This study was approved by the Institutional Human Research Ethics Committee of the University of Technology Sydney (HREC: 2014000110 and HREC ETH19-3676).
## 2.2. Experimental Methodology
Participants were seated for 5 min prior to three BP recordings using an automated monitor (OMRON IA1B, Kyoto, Japan). Three blood pressure readings were obtained both before and after the study protocol with 2-min intervals between each measurement [22]. Participants were then asked to complete the General Health Questionnaire (GHQ60) [23], which obtained detailed health information. Participants then underwent a baseline electrocardiogram (ECG) for 10 min followed by an ECG recording during the neurocognitive tasks performed. The ECG was obtained using a FlexComp Infiniti encoder (Thought Technology Ltd., Montreal, QC, Canada) and an ECG-Flex/Pro amplifier sensor (Thought Technology Ltd., Canada) connected to three electrode leads. BioGraph Infiniti software (T7900) (Thought Technology Ltd., Canada) was used to record and display the ECG wave. Prior to placement of the electrodes, the skin was cleaned using Liv-Wipe (Livingstone International Pty Ltd., Sydney, Australia) $70\%$ alcohol swabs. Disposable electrodes were used in all cases (Ag/AgCl ECG electrodes (Red Dot TM) 2239, Tukwila, WA, USA).
The electrodes were placed in an inverted triangle to allow for positive deflections corresponding to the P, Q, R, S, and T waves [24]. The negative electrode was placed beneath the right clavicle, the ground electrode beneath the left clavicle, and the positive electrode 2 centimeters beneath the sternum, over the xiphoid process. Additionally, the ECG was sampled at 2048 samples per second for high-precision detection of successive heart beats [25].
## 2.2.1. Neuropsychological Tasks
The tasks performed utilized the Cambridge Neuropsychological Test Automated Battery (CANTAB) and tests included were the spatial working memory (SWM) task, attention switching task (AST), rapid visual processing task (RVP), and the spatial span (SSP) task [26]. The SWM task requires the retention and manipulation of visuospatial information. Outcome measures include errors, strategy, and latency. The AST is a test of a participant’s ability to shift attention between tasks and to ignore irrelevant information during interfering and distracting events. This test measures top-down cognitive control and provides measures of latency and errors. The RVP task is a measure of sustained attention assessing latency, probability, and sensitivity to pattern recognition. Finally, the SSP task is an assessment of working memory capacity and provides outcome measures of span length, errors, attempts, and latency.
## 2.2.2. Heart Rate Variability
Prior to statistical analysis, ECG data was pre-processed to obtain time and frequency parameters of heart rate variability (HRV). The ECG data was imported into Kubios HRV software (Version 3.1, University of Kuopio, Kuopio, Finland). The R-waves were automatically detected by applying the built-in QRS detection algorithm [27]. Frequency bands obtained were low frequency (LF) (0.04–0.15 Hz), high frequency (HF) (0.15–0.4 Hz), total power HRV (TP), and the ratio of LF to HF (LF/HF). The inbuilt process within Kubios and the smoothness priors method was used to correct for artefacts and ectopic beats in the raw ECG data [27,28]. It should also be noted that the data were log-transformed prior to analysis, where relevant.
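As an illustration of how these parameters can be derived from an RR-interval series (a sketch of the general approach, not Kubios’s internal implementation; the RR values, the 4 Hz resampling rate and the Welch settings are our own assumptions):

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

# Invented RR-interval series (ms); real input would come from QRS detection
rr_ms = np.array([812, 830, 790, 845, 860, 805, 798, 822, 835, 810] * 20, dtype=float)

# Time-domain indices
sdnn = rr_ms.std(ddof=1)                       # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # beat-to-beat (vagal) index

# Frequency domain: resample the irregularly spaced RR series at 4 Hz,
# then estimate the power spectral density with Welch's method
t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
rr_even = interp1d(t, rr_ms, kind="cubic")(np.arange(t[0], t[-1], 0.25))
freqs, psd = welch(rr_even, fs=4.0, nperseg=256)

lf_band = (freqs >= 0.04) & (freqs < 0.15)     # LF: 0.04-0.15 Hz
hf_band = (freqs >= 0.15) & (freqs < 0.40)     # HF: 0.15-0.4 Hz
lf = trapezoid(psd[lf_band], freqs[lf_band])
hf = trapezoid(psd[hf_band], freqs[hf_band])
print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms, "
      f"log LF={np.log(lf):.2f}, log HF={np.log(hf):.2f}, LF/HF={lf / hf:.2f}")
```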
## 2.3. Statistical Analysis
Statistical analysis was performed using SPSS Version 22.0 (IBM Corp., New York, NY, USA) [29], with statistical significance reported at $p \leq 0.05.$ Independent sample t-tests were applied to establish significant differences in HRV parameters and neurocognitive performance measures between the blue- and white-collar workers.
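The same comparison can be reproduced outside SPSS; a minimal sketch with invented values using SciPy’s independent-samples t-test:

```python
from scipy import stats

# Invented log HF values for the two occupational groups
white_collar = [4.7, 4.9, 4.6, 5.0, 4.8, 4.5]
blue_collar  = [5.1, 5.3, 4.9, 5.4, 5.2, 5.0]

t, p = stats.ttest_ind(white_collar, blue_collar)
print(f"t = {t:.2f}, p = {p:.3f}, significant: {p <= 0.05}")
```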
## 3.1. Demographic Data of Blue and White-Collar Workers
The demographic data of the blue- and white-collar groups are shown below in Table 1. The white-collar workers had spent significantly more time in education than the blue-collar workers (4.33 ± 1.2 years versus 3.4 ± 1.2 years) ($p \leq 0.001$).
## 3.2. Neuropsychological Performance of Blue and White-Collar Workers
Independent sample t-tests of neuropsychological performance showed significant differences in the tasks (SWM, AST, RVP, SSP) between white-collar ($$n = 48$$) and blue-collar ($$n = 53$$) workers. The significant findings are presented in Table 2.
Attention Switching Task: During the AST, the white-collar worker group made fewer errors when incongruent cues were given compared to the blue-collar worker group (8 ± 3.14 and 9.4 ± 3.62, respectively) ($$p \leq 0.04$$). When the “side” cue was given, the white-collar worker group made more errors than the blue-collar worker group (4.33 ± 2.56 and 3.19 ± 2.16, respectively) ($$p \leq 0.02$$). Moreover, the white-collar worker group made significantly more correct responses than the blue-collar worker group overall (144.20 ± 8.53 and 139.43 ± 9.80, respectively) ($$p \leq 0.01$$).
Rapid Visual Processing Task: Throughout the RVP task, the ability to detect signals was significantly higher in the white-collar worker group than in the blue-collar worker group (0.90 ± 0.08 and 0.85 ± 0.08, respectively) ($$p \leq 0.002$$).
Spatial Span Task: Finally, in the SSP task the white-collar worker group made more total errors than the blue-collar worker group (13.48 ± 5.65 and 9.74 ± 6.55, respectively) ($$p \leq 0.003$$).
## 3.3. HRV in Blue and White-Collar Workers
Independent sample t-tests were used to compare HRV parameters between the white and blue-collar worker groups. The significant findings are summarised in Table 3.
Spatial Working Memory Task: Throughout the SWM task, the white-collar worker group, compared to the blue-collar worker group, had significantly higher log LF (6.33 ± 0.60 and 6.01 ± 0.48, respectively) ($$p \leq 0.004$$), log LF/HF (1.61 ± 0.83 and 1.22 ± 0.84, respectively) ($$p \leq 0.02$$), and log TP (6.7 ± 0.52 and 6.48 ± 0.37, respectively) ($$p \leq 0.02$$).
Rapid Visual Processing Task: The RVP task highlighted significantly lower HRV parameters in the white-collar worker group compared to the blue-collar worker group, particularly log LF (6.16 ± 0.47 and 6.44 ± 0.33, respectively) ($p \leq 0.001$), log HF (5.04 ± 0.60 and 5.28 ± 0.42, respectively) ($$p \leq 0.03$$), log TP (6.61 ± 0.44 and 6.89 ± 0.27, respectively) ($p \leq 0.001$), and log SDNN (3.39 ± 0.22 and 3.53 ± 0.12, respectively) ($p \leq 0.001$).
Spatial Span Task: During the spatial span (SSP) task, log HF was significantly lower in the white-collar worker group than in the blue-collar worker group (4.81 ± 0.58 and 5.07 ± 0.67, respectively) ($$p \leq 0.03$$).
## 4. Discussion
The present study aimed to investigate the differences in HRV and psychological performance between a sample of blue- and white-collar workers. The analysis indicated higher vagal cardiac mediation in blue-collar workers, as indexed by RMSSD and HF HRV, in response to spatial working memory and attention-based cognitive tasks. Additionally, these results show that blue-collar workers performed significantly better on spatial tasks while white-collar workers performed better on attentional process tasks.
The current literature comparing these two subgroups is very limited; however, early work by Myrtek [29], investigating the level of stress and strain and its relationship to heart rate, physical activity, emotional strain, and mental strain, found no differences in heart rate (HR) variability between the two groups. The authors did, however, find that white-collar workers were subjectively more stressed [29]. Additionally, blue-collar workers are thought to be subject to an increased physical workload while white-collar workers are thought to carry a high mental workload; although interviews and questionnaires supported this idea, the physiological measurements did not [29].
Early work in the literature highlights conflicting evidence regarding the predisposition of blue and white-collar workers to CVD with some studies suggesting blue-collar workers were more at risk [30] while others suggested white-collar workers were more at risk [2]. Moreover, there is very little research investigating HRV parameters, psychological performance measures, and their associations with CVD in these two cohorts, and the present research aimed to provide more information and data regarding the relationship between different occupational and physiological risk measures and CVD.
When comparing the two sample cohorts, the only statistically significant difference in demographics was the years spent in education, where the blue-collar workers had spent less time in education than the white-collar workers. Interestingly, Prihartono et al. [2] found that the increased level of education of white-collar workers significantly increased the prevalence of CVD. Moreover, the prevalence of CVD by diagnosis was higher in the white-collar worker population, while the prevalence by symptoms was higher among the blue-collar worker group [2]. Even though blue-collar workers are inherently more physically active in their day-to-day work, their socio-economic status and lifestyle choices may have a significant impact, particularly on access to health care. Lower education and lower salaries are more likely to predispose to unhealthy lifestyle choices [31]. Moreover, a higher BMI increased the prevalence of CVD in both blue- and white-collar workers [2].
## 4.1.1. Spatial Working Memory
During the SWM task, the LF, LF/HF, and TP parameters of HRV were all greater in the white-collar worker group than in the blue-collar worker group. LF HRV was traditionally thought to reflect sympathetic activity but, as previously mentioned, recent research indicates it is influenced by both the sympathetic and parasympathetic branches of the ANS [32]. This increase in LF HRV may point to increased sympathetic activity and dominance during these tasks in the white-collar worker group, a finding previously associated with an increased risk of CVD [33]. This has been contrasted by other literature reporting that low LF HRV was associated with certain risk factors that predispose to CVD, for example, hypertension [34]. Moreover, a review by Hillebrand et al. [35] highlighted that low HRV indices, including LF HRV, indicated a higher risk of CVD in populations without prior CVD. Interestingly, much of the prior research indicates that vagal withdrawal, and therefore an increased sympathetic response, is responsible for cardiovascular disease risks [36]. However, Hamaad et al. [33] provide a differing perspective, suggesting that it is sympathetic activation, rather than vagal withdrawal, that may be associated with cardiac events. The authors of [33] investigated the associations between time- and frequency-domain indices of HRV and inflammatory biomarkers in patients with acute coronary syndrome ($$n = 100$$, male = 77, average age = 63 ± 12 years) and healthy controls ($$n = 49$$, male = 32, average age = 60 ± 10 years). Though the correlations were modest, the authors reported an inverse relationship between LF HRV and inflammatory biomarkers and therefore implicate sympathetic tone in CVD [33]. This idea is further supported by several studies that further investigated inflammatory biomarkers and the associated HRV changes [37].
## 4.1.2. Rapid Visual Processing
The RVP task showed that the blue-collar workers had higher HRV parameters across the board, particularly LF, HF, TP and SDNN. This is an interesting finding, as the RVP task is one of sustained attention, and it was therefore expected that the white-collar worker group would exhibit higher levels of cardiac vagal control, as indexed by HF HRV or RMSSD. The present findings may reflect high levels of stress within the white-collar working population, as shown by lower HF HRV. Previous conclusions about which occupational group is more stressed are contentious, and the literature suggests a multitude of contributing variables. Dedele et al. [38] indicate that blue-collar workers are 1.5 times more likely to perceive higher levels of stress in general, whereas white-collar workers had a four-times-higher likelihood of perceiving greater stress when they had been sedentary for more than 3 h per day [38]. Contrastingly, Nydegger [39] found no significant differences in stress levels between blue- and white-collar workers, nor any differences between genders. Given that these studies only assessed perceived stress by way of surveys, the results may be too subjective, with numerous factors potentially influencing the responses; a more objective measure would have been of great benefit to support their findings. Notwithstanding, they do provide grounds to indicate intricate interrelationships between workplace stress, HRV, and CVD. Moreover, recommendations made to white-collar workers include improving sedentary lifestyles and increasing physical activity during work hours, while blue-collar workers must avoid unhealthy lifestyle habits [39,40]. These practices will ultimately reduce stress, improve cardiac autonomic activity and parasympathetic input, and may therefore reduce the risk of a cardiovascular event.
## 4.1.3. Spatial Span
The final difference in HRV between the blue and white-collar worker group found in this study was related to the SSP task, whereby the blue-collar worker group showed higher vagal mediation than the white-collar worker group. This is indicative of better control and better performance. Moreover, it may indicate a more relaxed scenario, as the SSP task is designed to evaluate working memory capacity in the 3D space around them, an environment familiar to blue-collar workers.
## 4.2. Comparison of Neuropsychological Performance between Blue and White-Collar Workers
Occupation has been considered an important predictor of cognitive ability and decline over time [41]. Furthermore, the executive function requirements of the workplace, as well as the complexity of the environment, appear to be correlated with cognitive decline [42]. Prior research has tended to focus on age-related decline in cognitive processing, and few studies have examined occupational effects. However, given that people spend a substantial portion of life at work, the workplace environment may have a significant effect [43].
## 4.2.1. Attention Switching
The AST showed that the white-collar workers made fewer errors when the cues were changing and more errors when the “side” cue was given. However, in the task as a whole, the white-collar workers gave significantly more correct responses than the blue-collar workers. In a longitudinal study spanning 10 years, Kim et al. [44] assessed executive function in blue-collar ($$n = 1216$$, $61\%$ female, aged 70.7 ± 4.64 years) and white-collar workers ($$n = 242$$, $22\%$ female, aged 69.98 ± 4.18 years). The authors gathered data using the Mini-Mental State Examination (MMSE) [45] and other potential covariates, including sociodemographic, health-related and occupational factors [44]. Primary findings relating the longest-held lifetime occupation to executive function decline showed no significant risks for males, whilst females showed a 2.5-fold increased risk of cognitive impairment among blue-collar workers compared to white-collar workers [44].
## 4.2.2. Rapid Visual Processing/Spatial Span
The white-collar workers showed significantly better performance during the RVP task, where their ability to detect sequences was much better; however, they made more errors during the SSP task. The relationship between mental workload and cardiovascular parameters is further illustrated by Capuana et al. [46]. These authors assessed 22 young adults (17 women, 18–27 years, average age = 20.5 years (SD not specified)) and 18 older adults (11 women, 65–83 years, average age = 72.3 (SD not specified)) and indicated relationships between cardiac measures and performance, as well as an association between increased cardiac workload and more errors in the older adults but not the younger adults [46]. This further supports and adds to the age-related literature regarding neurocognitive performance, with the added element of cardiac risk measures. The results of previous literature and the present findings suggest that the effects of occupation on executive functions are multifaceted [41]. Prior research has indicated that white-collar workers are more cognitively inclined in later years [41]. Moreover, manual laborers (including machine operators, assembly workers and plant operators) have been shown to have a significantly higher chance of reduced executive function compared to non-manual laborers (including business executives, administrators, and managers) [47]. As a whole, the white-collar workers seem to have performed better on the executive function tasks. Notwithstanding the varying performance on different tasks, an in-depth analysis must be conducted to supplement broader examinations in order to identify specific relationships between cardiac variables and neurocognitive performance measures.
Several factors may be considered when assessing the performance and risks of the blue- and white-collar worker populations. Most people spend a large portion of their life at work, so the inherent risks related to employment warrant further research. These risks may be a result of the complexity of given occupations, which was first touched upon by Schooler [48] and developed further by Schooler et al. [49]. These authors suggested that complex environments at work, or during leisure time, allow for continued reinforcement of executive function. This greater intellectual stimulation increases neural growth and synaptic density, which protects against cognitive decline [50]. Therefore, lower intellectual demands on blue-collar workers may predispose them to executive function impairments. This is just one facet by which the literature suggests the enhanced ability of white-collar workers. Another theory holds that, since blue-collar work is associated with a lower income, this translates into poorer housing, nutrition, environment, and lifestyle habits and practices, which may be linked to cognitive decline [51,52]. Interestingly, white-collar workers are more educated in the traditional sense, but this does not necessarily translate into greater overall intelligence. Given that white-collar workers are known to use cognitive abilities more often than blue-collar workers, it could be assumed that they have superior cognitive abilities. This may not be the case, however, as a study showed no evidence that regular use of computerized brain trainers improves general cognitive functioning [53].
## 4.3. Limitations and Future Directions
The present findings suggest that changes in HRV are influenced by the task at hand, across professions. Increased sample numbers in each profession would allow for stratification and observations within the same job type. For example, one white-collar worker may perform more administrative tasks while another may perform more data analytics, and these differences in neuropsychological load may further influence HRV. Moreover, this cross-sectional design provides only a snapshot in time of the measures. A longitudinal study would therefore allow a more in-depth analysis of how a particular profession may influence these physiological variables over the course of one’s life. It is also acknowledged that, even though only $18\%$ of the blue-collar worker group was made up of female workers, this is an accurate reflection of this population [54]. Although the present study identified numerous findings, these are associative rather than causal. Future studies may therefore investigate the causal link between vagal tone, working memory, and attention through techniques such as transcutaneous vagal nerve stimulation, or through neuroimaging.
## 5. Conclusions
Overall, the present research identified multiple significant differences in HRV parameters and neurocognitive performance measures between the blue and the white-collar workforce. Blue-collar workers indicated higher vagally mediated cardiac control during neuropsychological tasks with better performance in spatial working memory exercises, whilst white-collar workers had superior performance on attention-based tasks.
Notably, reduced parasympathetic modulation of the heart was observed, particularly in white-collar workers.
# Glucokinase Inactivation Ameliorates Lipid Accumulation and Exerts Favorable Effects on Lipid Metabolism in Hepatocytes
## Abstract
Glucokinase-maturity onset diabetes of the young (GCK-MODY) is a rare form of diabetes caused by inactivating mutations in the GCK gene and is characterized by a low incidence of vascular complications. This study aimed to investigate the effects of GCK inactivation on hepatic lipid metabolism and inflammation, providing evidence for the cardioprotective mechanism in GCK-MODY. We enrolled GCK-MODY, type 1 and type 2 diabetes patients to analyze their lipid profiles, and found that GCK-MODY individuals exhibited a cardioprotective lipid profile with lower triacylglycerol and elevated HDL-c. To further explore the effects of GCK inactivation on hepatic lipid metabolism, GCK knockdown HepG2 and AML-12 cell models were established, and in vitro studies showed that GCK knockdown alleviated lipid accumulation and decreased the expression of inflammation-related genes under fatty acid treatment. Lipidomic analysis indicated that the partial inhibition of GCK altered the levels of several lipid species, with decreased saturated fatty acids and glycerolipids (including triacylglycerol and diacylglycerol) and increased phosphatidylcholine in HepG2 cells. The hepatic lipid metabolism altered by GCK inactivation was regulated by the enzymes involved in de novo lipogenesis, lipolysis, fatty acid β-oxidation and the Kennedy pathway. We conclude that partial inactivation of GCK exerts beneficial effects on hepatic lipid metabolism and inflammation, which potentially underlie the protective lipid profile and low cardiovascular risk in GCK-MODY patients.
## 1. Introduction
Glucokinase (GCK) catalyzes the phosphorylation of glucose to glucose 6-phosphate and is generally considered the initial glucose-sensing component and gatekeeper for glucose metabolism. The expression of GCK, at both the transcript and protein levels, is enriched in the pancreas, liver, intestine, hypothalamus and pituitary [1]. In pancreatic β-cells, GCK participates in the regulation of glucose-induced insulin secretion. In the liver, GCK plays a leading role in glycogen synthesis and glycolysis [2].
Given its central role in glucose homeostasis, loss of GCK function leads to disease. Glucokinase-maturity-onset diabetes of the young (GCK-MODY) is caused by heterozygous inactivating mutations in GCK and the resulting impaired glucose sensing. However, unlike type 1 and 2 diabetes (T1D and T2D) or other MODYs, patients with GCK-MODY generally have a favorable prognosis without requiring antidiabetic treatment [3]. Additionally, GCK-MODY patients rarely suffer cardiovascular complications, their risk being comparable to that of nondiabetic healthy individuals [4,5]. The low occurrence of vascular complications in GCK-MODY makes it a natural model for investigating the mechanisms that protect against cardiovascular disorders under prolonged hyperglycemia. Research has shown that GCK-MODY individuals exhibit a favorable serum lipid profile, with lower levels of triacylglycerols (TAGs) and higher high-density lipoproteins (HDLs), even compared with healthy subjects [6]. Our previous work [7] demonstrated that, compared to T2D, several serum phosphatidylcholines (PCs) and plasmalogen PCs (PCps) were significantly increased in GCK-MODY; these contribute to the antiapoptotic and anti-inflammatory effects of HDL. Furthermore, evidence has suggested that the distinct lipid profile of GCK-MODY individuals exerts cardioprotective effects [8]. On the other hand, GCK activators (GKAs) have been reported to cause adverse effects, including hyperlipidemia, hepatic fat accumulation and hepatic steatosis, in addition to hypoglycemic effects, in both clinical trials [9,10,11] and animal studies [12,13], indicating a potential role of GCK in maintaining lipid homeostasis.
Overall, current observations indicate that, in addition to glucose homeostasis, GCK plays a crucial role in regulating lipid metabolism, and the inactivation of GCK may underlie the antiatherogenic profile associated with GCK-MODY. However, the association between GCK mutations and the lipid profile, and its underlying mechanism, remains undefined. Given the critical role of the liver in whole-body lipid homeostasis and the relatively high expression of GCK in the liver, we speculated that partial inactivation of GCK could exert favorable effects on hepatic lipid metabolism, probably through regulating key enzymes involved in metabolic pathways, thereby contributing to the cardioprotective lipid profile of GCK-MODY. The objective of the present study was to explore the protective lipid profile in GCK-MODY patients compared with T1D and T2D and the effects of GCK knockdown on hepatic lipid accumulation and inflammation in cell models.
## 2.1. GCK-MODY Patients Exhibit a Favorable Lipid Profile
The characteristics and lipid profiles of GCK-MODY, T1D and T2D patients and nondiabetic control subjects are shown in Table 1. In accordance with the type of diabetes, the glucose profile, including FBG ($p < 0.0001$), HbA1c ($p < 0.001$) and GA ($p < 0.0001$), was progressively higher across the three patient groups. The lipid metabolic profile of GCK-MODY was significantly improved compared to T1D and T2D and was comparable to that of the normal controls. The levels of TAG ($p < 0.0001$), TC ($p < 0.0001$) and LDL-c ($p < 0.0001$) were markedly decreased in GCK-MODY compared with T2D. Additionally, a significant elevation in HDL-c was shown in GCK-MODY compared with both T1D ($p = 0.0060$) and T2D ($p < 0.0001$). Furthermore, the level of CRP was also lower in GCK-MODY than in T1D ($p = 0.0256$) and T2D ($p = 0.0168$), indicating a reduced cardiovascular risk.
## 2.2. GCK Knockdown Alleviated Lipid Accumulation in HFA-Treated HepG2 Cells
As the liver is the central organ of lipid metabolism, we examined the effects of GCK inactivation on hepatic lipid metabolism by establishing in vitro liver cell models to explore the possible mechanism of the unique lipid profile in GCK-MODY individuals. Lentivirus transfection was applied to generate stable GCK knockdown in the human HepG2 cell line. Glucokinase activity determination and Western blot were used to validate the transfection efficacy. Glucokinase activity was significantly reduced, by $50\%$, in GCK knockdown HepG2 cells ($p < 0.0001$) (Figure 1A). Consistently, the GCK protein level also displayed marked downregulation ($p = 0.0017$) (Figure 1B).
To investigate the impact of GCK inactivation on hepatic lipid metabolism, the HepG2 cells were challenged with HFA to induce lipotoxicity. Oil Red O staining suggested that the knockdown of GCK significantly alleviated HFA-induced lipid accumulation in HepG2 cells (Figure 1C). Meanwhile, the intracellular TAG content was also significantly reduced in the GCK knockdown group under HFA challenge ($p < 0.0001$) (Figure 1D). These results indicated that GCK knockdown reduced TAG content and prevented lipid accumulation in HFA-treated HepG2 cells.
## 2.3. Lipid Profile in GCK Knockdown HepG2 Cells
Lipidomic analysis was applied to characterize the overall changes in lipid content caused by GCK knockdown (Figure 2). The score scatter plot of OPLS-DA (Figure 2A) showed that the samples from the control and GCK knockdown groups clustered separately in both negative and positive ionization modes, implying that the partial inactivation of GCK resulted in significant changes in the HepG2 lipidome. Hierarchical cluster analysis was also performed on the screened differential metabolites, based on thresholds for variable importance values (VIP > 1.0) and p values (<0.05) (Figure S1).
The top 30 lipid metabolites were selected for further analysis based on the VIP score. The statistical data are depicted in a heatmap (Figure 2B). The selected metabolites could be classified into five lipid classes: glycerophospholipids, glycerolipids, sphingolipids, acylcarnitines and glycolipids. The bar chart shows the relative differences in the lipid species in the GCK knockdown group compared with the control (Figure 2C). Among the identified lipids, PC, Cer, ACar and SQDG displayed a significant increase in the GCK knockdown group, whereas PE, TAG, DAG, GM3 and GlcADG were markedly downregulated (the lipid species detected in the lipidomic analysis are shown in Table S1). The fold changes in the top 30 lipids are displayed in a matchstick plot (Figure S2).
## 2.4. GCK Knockdown Altered Hepatic Lipid Metabolism
Specifically, we analyzed the abundance of the differential metabolites involved in lipid metabolism pathways (Figure 3). Although fatty acids were not among the top 30 lipids selected by VIP, given their importance as the precursors of nearly all lipid classes, the significantly changed fatty acids between groups were also included in the analysis. In fatty acid metabolism, palmitic acid (16:0) was drastically decreased in the GCK knockdown group, whereas linoleic acid (18:2) was increased. Additionally, among ACars, the intermediates of fatty acid β-oxidation, the overall tendency was an increase, although some individual ACars decreased; correlation analysis (Figure 2D) showed that ACars were negatively correlated with TAG and DAG, suggesting active fatty acid utilization by β-oxidation in GCK-inactivated HepG2 cells. In glycerolipid metabolism, TAG and DAG were significantly reduced, possibly owing to inhibited lipogenesis. In glycerophospholipid metabolism, most PCs showed an increasing trend, although some individual PCs were downregulated in the GCK knockdown group. The correlation analysis (Figure 2D) indicated that PCs were negatively correlated with TAG and DAG. Therefore, the biosynthesis of PC from DAG precursors via the Kennedy pathway was promoted in HepG2 cells when GCK was inactivated. Moreover, PEs, which can also be converted to PCs via the PEMT pathway, were significantly reduced. Sphingolipid metabolism was altered as well: GM3s were markedly downregulated, while the overall trend of Cers was upward in the GCK knockdown group. Taken together, these observations imply that the overall biosynthesis of PCs was preferentially enhanced, while palmitic acid, TAG and DAG were significantly reduced, probably owing to inhibited synthesis or increased utilization, in HepG2 cells with GCK inactivation.
## 2.5. Impact of GCK Knockdown on Lipid Metabolism-Related Enzymes and Inflammatory Genes in Human and Mouse Hepatic Cell Lines
To elucidate the mechanisms by which GCK knockdown affects hepatic lipid metabolism, the expression of proteins responsible for de novo lipogenesis (FASN and ACC), lipolysis (ATGL), β-oxidation of fatty acids (PPARα and CPT-1) and the Kennedy pathway for PC synthesis (CHPT-1) was investigated in human HepG2 cells and mouse AML-12 cells using Western blot. The results showed that the levels of FASN and ACC were significantly downregulated in GCK knockdown HepG2 cells (Figure 4A). Conversely, the ratio of phosphorylated ACC to ACC was increased (though not significantly), indicating an inhibited state of ACC (Figure 4A). Additionally, the expression of ATGL, PPARα, CPT-1 and CHPT-1 was significantly upregulated in GCK knockdown HepG2 cells (Figure 4B–D). Similar alterations in protein levels were also found in siRNA-treated AML-12 cells, except for ATGL, which remained unaffected in both the 100 nM and 200 nM dose groups (Figure 5). In the siRNA groups, the levels of GCK, FASN and ACC were markedly reduced, and the decrease in FASN was dose-dependent between the 100 nM and 200 nM siRNA treatments. The ratio of phosphorylated ACC to ACC and the expression of PPARα, CPT-1 and CHPT-1 were significantly increased, although the phosphorylated-ACC/ACC ratio was elevated only in the 200 nM siRNA-treated group. Altogether, these findings showed that GCK inactivation influenced the expression of target enzymes involved in de novo lipogenesis, lipolysis, FA oxidation and PC synthesis, resulting in reduced TAG accumulation and elevated PCs in hepatic cells.
Additionally, we measured the inflammatory cytokines (Figure 4E) and the expression of NLRP3 and p-NF-κB (Figure S3) in GCK knockdown HepG2 cells under HFA challenge. The levels of IL-1β and MCP-1 were significantly decreased in GCK knockdown HepG2 cells under both normal and HFA conditions. IL-6 was significantly downregulated in GCK knockdown cells under normal conditions, but under HFA conditions showed only a nonsignificant downward trend ($p = 0.07$). In the HFA group, the levels of cytokines in both control and GCK knockdown cells were increased compared with the normal groups, but the expression of IL-1β and MCP-1 remained lower in GCK knockdown cells relative to the control. Consistently, the expression of NLRP3 and p-NF-κB was also significantly reduced in GCK knockdown cells under normal and HFA conditions. Overall, the inflammatory markers were reduced in GCK knockdown HepG2 cells under both normal and HFA conditions, indicating a potential role of GCK knockdown in preventing inflammation induced by lipotoxicity.
## 3. Discussion
Glucokinase is recognized as a glucose sensor. Recent evidence has suggested that inactivation of GCK may exert cardioprotective effects in GCK-MODY by regulating the lipid profile [7,8]. In the present study, we confirmed that GCK-MODY individuals exhibited metabolically normal, cardioprotective lipid profiles (i.e., lower TAG, TC and LDL-c and higher HDL-c) compared to T1D and T2D. We also found that, in hepatic cell models (HepG2 and AML-12), GCK inactivation reduced lipid deposition and inflammation under HFA intervention and affected the hepatic lipid profile by regulating key enzymes involved in lipid metabolism. These findings demonstrate the beneficial effects of GCK inactivation on hepatic lipid metabolism and uncover a potential mechanism for the protective lipid profile and low cardiovascular risk in GCK-MODY patients.
The liver is the major organ of lipid metabolism. We demonstrated that GCK inactivation alleviated lipid accumulation under HFA intervention in HepG2 cells. To further investigate the molecular mechanism, lipidomic analysis was applied. A decreased level of palmitic acid, as well as elevated PUFAs, including linoleic acid and docosahexaenoic acid (DHA), was detected in GCK knockdown liver cells. Evidence has shown that an excess of the saturated FA palmitic acid results in lipotoxicity and inflammation in the liver, whereas some polyunsaturated fatty acids (PUFAs), including linoleic acid, elicit the opposite effects, improving insulin sensitivity and alleviating inflammation [14,15,16]. Additionally, ACars, a marker of mitochondrial β-oxidation [17], were increased in GCK knockdown HepG2 cells and were negatively correlated with DAG and TAG, suggesting enhanced FA β-oxidation. Collectively, these results suggest that GCK inactivation may alleviate hepatic lipotoxicity and FA accumulation by decreasing the content of deleterious saturated FAs and enhancing FA β-oxidation.
Moreover, the intracellular levels of the glycerolipids TAG and DAG were both reduced in GCK knockdown HepG2 cells. Accumulating evidence indicates that glycerolipid homeostasis is linked to glycerophospholipid metabolism [18]: when PC or PE synthesis is enhanced, the conversion of DAG to TAG is inhibited. The main route for the biosynthesis of PC is the Kennedy pathway, which condenses CDP-choline with DAG to produce PC via the rate-determining enzyme cholinephosphotransferase (CHPT1) [19]. Hepatic PC synthesis has been considered metabolically beneficial in that its enhancement facilitates the clearance of glycerolipids, including DAGs and TAGs, and induces the production and secretion of cardioprotective HDLs [20,21]. In this study, correlation analysis showed that PCs were negatively correlated with TAGs and DAGs in GCK-inactivated HepG2 cells, indicating increased lipid flux along the TAG-DAG-PC axis, in line with our previous work on serum lipidomics in GCK-MODY individuals [7]. Additionally, PEs, which can be converted to PC by the PEMT pathway, were significantly reduced in the GCK knockdown group. A reduced hepatic PC/PE ratio has been reported to be associated with hepatic steatosis, inflammation and fibrosis [22,23]. Subsequently, ELISA results showed that the levels of inflammatory cytokines (IL-1β, IL-6 and MCP-1) were significantly decreased in the GCK knockdown groups under normal and HFA conditions, suggesting an anti-inflammatory state in GCK knockdown HepG2 cells. Taken together, GCK inactivation appears to promote favorable hepatic and serum lipid profiles, probably by enhancing the biosynthesis of hepatic PC, thereby inducing the clearance of TAG and DAG, increasing circulating HDLs and preventing liver inflammation and lipid accumulation.
Furthermore, two kinds of sphingolipids displayed significant but opposite changes in GCK knockdown HepG2 cells: the levels of several Cers were elevated, while overall GM3 levels were reduced. Notably, various sphingolipids are associated with lipotoxicity and inflammation and are elevated in animal and human NAFLD and diabetes [24,25,26]. In this study, the elevation in Cers and the reduction in GM3 in hepatocytes may offset each other’s effects on lipotoxicity and inflammation, which may explain the lower overall inflammation level in GCK-inactivated cells.
In addition, several key enzymes involved in lipid metabolic pathways were examined in the HepG2 and AML-12 cell lines. FASN and ACC are rate-limiting enzymes of de novo lipogenesis in the liver [27,28]. Hepatic FASN and ACC proteins were decreased in the GCK knockdown group, which might inhibit de novo lipogenesis and thus contribute to the reduced palmitic acid and TAG levels. PPARα and its downstream effector CPT-1 promote FA β-oxidation in the liver [29,30]; here, GCK inactivation significantly upregulated the expression of both. Additionally, ATGL and CHPT-1, which catalyze the hydrolysis of TAG into DAG [31] and PC synthesis from DAG precursors [32], respectively, were both upregulated in GCK knockdown HepG2 cells, promoting lipolysis of TAG and facilitating the downstream biosynthesis of PCs. These enzymes were also examined in siRNA-treated mouse AML-12 cells, and similar alterations in protein levels were found, except for ATGL, which remained unaffected in both the low- and high-dose groups; this probably reflects distinct metabolic regulation between human and mouse [33], and further investigation in animal models is needed.
This study has several limitations. We used in vitro cell models to investigate the role of GCK inactivation in liver lipid metabolism; therefore, further in vivo experiments are needed. In addition to the liver, the effects of GCK inactivation on lipid metabolism should be validated in other compartments, including serum and adipose tissue. Additionally, this study focused on the partial inactivation of GCK; as more than 600 loss-of-function mutations of GCK have been described [34], further studies of GCK point mutations in cell or animal models are needed to determine whether the present findings are common to different GCK mutations.
In conclusion, partial inactivation of GCK ameliorated hepatic lipid accumulation and inflammation by altering the expression of hepatic genes involved in lipogenesis, lipolysis and β-oxidation in HepG2 and AML-12 cell models. These findings suggest that reduced GCK activity improves hepatic lipid metabolism, providing a candidate mechanism for the favorable lipid profile and low cardiovascular risk in GCK-MODY patients. Glucokinase inactivation could be a potential strategy for the prevention of diabetes-related vascular complications. Further investigations are required to explore the protective and curative effects of GCK inactivation on dyslipidemia and cardiovascular complications in diabetic and nondiabetic populations.
## 4.1. Study Population and Data Collection
The study cohort comprised GCK-MODY ($$n = 33$$), T1D ($$n = 34$$), T2D ($$n = 34$$) and healthy individuals ($$n = 30$$). All participants were recruited from the outpatient clinic and inpatient ward of the endocrinology department at the Peking Union Medical College Hospital (PUMCH), Beijing, China, between January 2017 and December 2021. Demographic information and laboratory test results were collected. The study protocol was approved by the Peking Union Medical College Hospital Ethics Committee, and written informed consent was obtained from all participants.
## 4.2. Cell Culture and High Fatty Acid Treatment
The human hepatocellular carcinoma cell line (HepG2) and alpha mouse liver 12 cell line (AML-12) were obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). The HepG2 cells were cultured in DMEM with $10\%$ fetal bovine serum (FBS) and $1\%$ penicillin–streptomycin (100 U/mL penicillin, 10 µg/mL streptomycin) and maintained at 37 °C with $5\%$ CO2. The AML-12 cells were grown in DMEM/F12 (1:1) supplemented with 0.005 mg/mL insulin, 0.005 mg/mL transferrin, 0.005 mg/mL selenium, $10\%$ FBS and 40 ng/mL dexamethasone (Gibco, San Diego, CA, USA) at 37 °C with $5\%$ CO2. For high fatty acid (HFA) treatment, cells were incubated with oleic and palmitic acid at a 2:1 ratio (500 μM/250 μM) or vehicle for 24 h or 48 h.
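As a quick plausibility check on the HFA medium preparation, the sketch below computes the fatty acid stock volumes for the 2:1 oleic/palmitic mixture from the dilution relation C1V1 = C2V2. The stock concentrations and final volume are illustrative assumptions, not values reported in this section.

```python
# Sketch: stock volumes for the 2:1 (500 uM / 250 uM) oleic/palmitic HFA medium.
# Stock concentrations and the final volume below are assumptions for
# illustration, not values from the methods.

def stock_volume_ul(target_um: float, stock_mm: float, final_ml: float) -> float:
    """C1*V1 = C2*V2, solved for the stock volume in microliters."""
    return target_um * (final_ml * 1000.0) / (stock_mm * 1000.0)

final_volume_ml = 10.0                           # medium to prepare (assumed)
oleic_stock_mm, palmitic_stock_mm = 100.0, 50.0  # hypothetical conjugated stocks

v_oleic = stock_volume_ul(500.0, oleic_stock_mm, final_volume_ml)
v_palmitic = stock_volume_ul(250.0, palmitic_stock_mm, final_volume_ml)
print(f"oleic stock: {v_oleic:.1f} uL, palmitic stock: {v_palmitic:.1f} uL")
# -> 50.0 uL of each stock per 10 mL of medium under these assumptions
```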
## 4.3. Lentivirus Transfection
Lentivirus-mediated GCK knockdown (hU6-GCK-CBh-gcGFP-IRES-puro, GV493) and control constructs were synthesized by Genechem (Shanghai, China). The GCK-KD group was transfected with the GCK knockdown lentivirus, and the control group was transfected with the empty lentivirus. HepG2 cells were cultured in 6-well plates (1 × 106/well). When confluency reached about $60\%$ (24 h), the cells were transfected with the constructed human GCK knockdown lentivirus or GFP-expressing control vector (Genechem, Shanghai, China) at a multiplicity of infection (MOI) of 10, with 40 μL/mL infection enhancer HitransG P (Genechem, Shanghai, China) in the medium. The lentivirus-containing medium was replaced with fresh medium after 12–16 h. Subsequently, the GCK-KD group was selected with 2 μg/mL puromycin for 72 h.
## 4.4. siRNA Transfection
For transient knockdown of GCK in AML-12 cells, mouse GCK-siRNA (GCTCAGAAGTTGGAGACTT) and negative control siRNA were designed and synthesized by RIBOBIO Co., Ltd. (Guangzhou, China). The transfection of siRNA was facilitated by Lipofectamine 2000 reagent (Invitrogen, Waltham, MA, USA) at siRNA:Lipo2000 = 100 pmol:5 μL for each well of a 6-well plate. Two concentrations of GCK-siRNA were used [100 nM (200 pmol siRNA) and 200 nM (400 pmol siRNA)], and cells were treated for 24 h. The transfection efficiency for GCK was confirmed by Western blot and enzyme activity examination.
## 4.5. Oil Red O Staining and Intracellular TAG Levels
The HepG2 cells were plated in 6-well plates. After the cells reached 40–$60\%$ confluence, they were treated with HFA as described above. After 48 h, the cells were washed three times with PBS and fixed with $4\%$ paraformaldehyde for 30 min at room temperature. The fixed cells were washed gently with PBS and immersed in $60\%$ isopropanol for 5 min. The isopropanol was then removed, and the cells were stained with Oil Red O solution for 20 min and Mayer’s Hematoxylin Stain solution for 1 min. The excess dye was removed, and the cells were washed four times with distilled water before microscopic observation under bright field.
For intracellular TAG levels, after treatment with HFA for 48 h, the HepG2 cells were harvested and lysed to prepare cell lysates. Intracellular TG levels were measured using a triglyceride quantification kit (MICHY Biology, Suzhou, China) following the manufacturer’s instructions. The protein content of the lysate was determined using a bicinchoninic acid kit (Invitrogen, Waltham, MA, USA). The optical density was read immediately using a microplate reader at a wavelength of 505 nm. The TG content was expressed as mg/mg protein.
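For readers reproducing the normalization, below is a minimal sketch of the calculation implied above, assuming a linear standard curve for the 505 nm readout; the curve parameters and well readings are placeholders rather than kit constants.

```python
# Sketch: TG read off a (hypothetical) linear standard curve at 505 nm,
# then normalized to lysate protein (mg TG per mg protein).
import numpy as np

def tg_mg_per_mg_protein(od505: float, slope: float, intercept: float,
                         protein_mg_per_ml: float) -> float:
    tg_mg_per_ml = slope * od505 + intercept   # standard-curve interpolation
    return tg_mg_per_ml / protein_mg_per_ml

ods = np.array([0.42, 0.44, 0.40])             # hypothetical triplicate wells
vals = [tg_mg_per_mg_protein(od, slope=1.8, intercept=0.02,
                             protein_mg_per_ml=2.5) for od in ods]
print(f"TG: {np.mean(vals):.3f} +/- {np.std(vals, ddof=1):.3f} mg/mg protein")
```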
## 4.6. Lipidomic Analysis
The lipid profile of GCK knockdown HepG2 cells was further investigated using lipidomic analysis. As previously reported [35,36], 480 μL of extraction solvent (MTBE:MeOH = 5:1) was added to each sample for metabolite extraction. The samples were centrifuged, and the supernatants were analyzed by LC/MS. LC-MS/MS analyses were performed using a UHPLC system (Vanquish, Thermo Fisher Scientific) with a UPLC HSS T3 column (2.1 mm × 100 mm, 1.8 μm) coupled to a Q Exactive HFX mass spectrometer (Orbitrap MS, Thermo) in both positive and negative electrospray ionization modes. The QE mass spectrometer acquired MS/MS spectra in data-dependent acquisition (DDA) mode under the control of the acquisition software (Xcalibur 4.0.27, Thermo).
## 4.7. Data Processing
The raw data files were converted to mzXML format using the ‘msconvert’ program from ProteoWizard. The CentWave algorithm in XCMS [37] was used for peak detection, extraction, alignment and integration; the minfrac for annotation was set at 0.5 and the cutoff for annotation at 0.3. Lipid identification was achieved through spectral matching against the LipidBlast library, implemented in R on top of XCMS. Orthogonal projections to latent structures discriminant analysis (OPLS-DA) was performed to identify the sources of variation between groups. A permutation test repeated 200 times was conducted to confirm that the model was not overfitted. Metabolites with variable importance for the projection (VIP) values exceeding 1.0 and p values from Kruskal–Wallis tests or Student’s t test below 0.05 were selected as discriminant metabolites. Correlations between lipids were analyzed by Pearson’s correlation. For the meta-analysis, the peak intensity data were converted using the Z-score transformation to represent metabolite abundance.
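A minimal sketch of the screening rule described above (VIP > 1.0 combined with p < 0.05, followed by Z-score transformation). The table layout, group labels, and VIP vector are assumptions for illustration; the OPLS-DA step that yields the VIP values is not reproduced here.

```python
# Sketch: select differential lipids by VIP > 1.0 and t-test p < 0.05,
# then Z-score the retained intensities for heatmap/meta-analysis use.
import numpy as np
import pandas as pd
from scipy import stats

def screen_lipids(intensity: pd.DataFrame, groups: pd.Series,
                  vip: pd.Series) -> pd.DataFrame:
    """intensity: samples x lipids; groups: 'CTRL'/'KD' label per sample."""
    ctrl, kd = intensity[groups == "CTRL"], intensity[groups == "KD"]
    pvals = pd.Series(stats.ttest_ind(ctrl, kd, axis=0).pvalue,
                      index=intensity.columns)
    hits = [c for c in intensity.columns if vip[c] > 1.0 and pvals[c] < 0.05]
    z = (intensity[hits] - intensity[hits].mean()) / intensity[hits].std(ddof=1)
    return z

# Tiny synthetic demonstration (12 samples x 5 lipids)
rng = np.random.default_rng(0)
intensity = pd.DataFrame(rng.lognormal(size=(12, 5)),
                         columns=[f"lipid_{i}" for i in range(5)])
groups = pd.Series(["CTRL"] * 6 + ["KD"] * 6, index=intensity.index)
vip = pd.Series(rng.uniform(0, 2, 5), index=intensity.columns)
print(screen_lipids(intensity, groups, vip).shape)
```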
## 4.8. GCK Enzyme Activity Determination
The enzyme activity of GCK in cells was assessed using commercial assay kits (Abcam, Cambridge, UK) according to the manufacturer’s instructions. GCK converts glucose into glucose-6-phosphate, generating NADPH through a coupled reaction; the NADPH is detected by a probe that yields an intense fluorescent product (Ex/Em = 535/587 nm). Briefly, the cell lysate was diluted in GCK assay buffer (Tris-HCl buffer, pH 8.0). The reaction medium included Tris-HCl (pH 7.4), MgCl2, dithiothreitol, $0.1\%$ bovine serum albumin, KCl, glucose, nicotinamide adenine dinucleotide phosphate, glucose-6-phosphate dehydrogenase and the NADPH probe. One unit of glucokinase activity is defined as the amount of enzyme that catalyzes the release of 1.0 µmol of NADPH per minute at pH 8.0 and room temperature. The fluorescence was measured with a microplate reader (Synergy Neo2, BioTek, VT, USA).
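The unit definition above implies a simple conversion from the kinetic fluorescence slope to enzyme activity; the sketch below assumes an NADPH standard curve expressed as RFU per nmol, with all numbers hypothetical rather than taken from the kit.

```python
# Sketch: convert the kinetic slope at Ex/Em 535/587 nm into activity.
# 1 U = 1 umol NADPH/min, hence 1 mU = 1 nmol NADPH/min.

def gck_activity_mU_per_mg(rfu_per_min: float, rfu_per_nmol: float,
                           protein_mg: float) -> float:
    nmol_per_min = rfu_per_min / rfu_per_nmol   # via NADPH standard curve
    return nmol_per_min / protein_mg

# e.g. 240 RFU/min, 80 RFU per nmol NADPH, 0.05 mg lysate protein per well
print(gck_activity_mU_per_mg(240.0, 80.0, 0.05), "mU/mg")  # -> 60.0 mU/mg
```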
## 4.9. Enzyme-Linked Immunosorbent Assay (ELISA)
The supernatant of control and GCK knockdown cells was harvested after 48 h of HFA treatment and stored at −80 °C after centrifugation. The levels of Interleukin 1β (IL-1β), Interleukin 6 (IL-6) and monocyte chemotactic protein 1 (MCP-1) in supernatant were detected by ELISA kits (MULTI Science, Hangzhou, China) according to the manufacturers’ protocols.
## 4.10. Western Blotting
Cultured cells were harvested and lysed with RIPA containing $1\%$ PMSF and phosphatase inhibitors for 30 min. Total protein concentrations were determined by the bicinchoninic acid kit (Invitrogen, USA) according to the manufacturer’s instructions. Equal amounts of total protein lysates were separated by SDS-PAGE and transferred to a PVDF membrane and blocked with $5\%$ nonfat dry milk. The membranes were incubated overnight at 4 °C with the primary antibodies (Abcam, Cambridge, UK). The membranes were washed with TBST and incubated with an HRP-conjugated secondary antibody for 90 min. The blots were visualized using an enhanced chemiluminescence detection system.
## 4.11. Statistical Analysis
Statistical analysis was performed using GraphPad Prism software (version 8.0.2, San Diego, CA, USA). Continuous variables are presented as mean ± SD or median (interquartile range), as appropriate. For two groups, an unpaired two-tailed t-test was used for intergroup comparisons. For more than two groups, one-way analysis of variance (ANOVA) was used to assess statistically significant differences. Differences between groups were considered statistically significant when p ≤ 0.05.
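As an illustration of the two comparison procedures, here is a hedged sketch using SciPy in place of GraphPad Prism; the data arrays are simulated placeholders, not study data.

```python
# Sketch: unpaired two-tailed t test for two groups, one-way ANOVA for more.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ctrl = rng.normal(1.00, 0.10, 6)     # e.g. control TAG content (simulated)
kd = rng.normal(0.70, 0.10, 6)       # e.g. GCK-knockdown TAG content
third = rng.normal(0.85, 0.10, 6)    # e.g. a hypothetical third group

t_stat, p_two = stats.ttest_ind(ctrl, kd)           # two groups
f_stat, p_anova = stats.f_oneway(ctrl, kd, third)   # more than two groups
print(f"t test p = {p_two:.4f}; ANOVA p = {p_anova:.4f}")
```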
# Associations of Cooking Skill with Social Relationships and Social Capital among Older Men and Women in Japan: Results from the JAGES
## Abstract
The health benefits of social relationships and social capital are well known. However, little research has examined the determinants of social relationships and social capital. We examined whether cooking skill was associated with social relationships and social capital in older Japanese people. We used 2016 Japan Gerontological Evaluation Study data on a population-based sample of men and women aged ≥65 years ($$n = 21,061$$). Cooking skill was assessed using a scale with good validity. Social relationships were evaluated by assessing neighborhood ties, frequency and number of meetings with friends, and frequent meals with friends. Individual-level social capital was evaluated by assessing civic participation, social cohesion, and reciprocity. Among women, high-level cooking skill was positively associated with all components of social relationships and social capital. Women with high-level cooking skill were 2.27 times ($95\%$ CI: 1.77–2.91) more likely to have high levels of neighborhood ties and 1.65 times ($95\%$ CI: 1.20–2.27) more likely to eat with friends, compared with those with middle/low-level cooking skill. Cooking skill explained $26.2\%$ of the gender difference in social relationships. Improving cooking skills may be key to boosting social relationships and social capital, which would prevent social isolation.
## 1. Introduction
Globally, there were 901 million people aged 60 years or older in 2015, and this number is projected to rise to 1.4 billion by 2030 [1]. In older age, social networks may decline because of retirement, adult children’s independence, and bereavement after the death of spouses or friends. Socially isolated older people are at increased risk of several detrimental health outcomes including mortality [2], dementia [3], and poor mental health [4]. Therefore, it is important to find modifiable factors that foster social relationships among older adults.
Social relationships are measured in a variety of ways, with three main aspects being used in research on health: social network, social activity, and social support [5]. Social networks and social activity represent structural aspects of social relationships, whereas social support represents functional aspects of social relationships [5]. Social network covers network size (number of members) and density (frequency of contact between members), social activities are represented by social participation and social engagement, and social support refers to a perception of the availability of support from members of the social network [5]. In addition to social relationships, social capital is another important health-promoting concept. Social capital is described as resources that people can receive through their social networks, although there is no universally agreed definition of social capital [6,7].
The health benefits of both individual- and community-level social capital have been shown in many epidemiological studies [6,7,8,9]. However, little research has examined the determinants of social capital. Recently, gender inequality in social capital has been reported, with women having higher levels of some social capital components, such as reciprocity and bridging, compared with men [10,11]. Compared with men, women tend to invest more in social relationships and building intimate emotional relationships [12,13]. In a study of older adults in Japan and England, women met with friends more often than men did [14,15]. However, the reasons for differences in social relationships between men and women are still unknown. In addition to gender, ethnicity and socioeconomic status (SES) have been reported as possible determinants of social capital [10,11,16], but these factors (i.e., gender, ethnicity, and SES) are difficult or impossible to modify through intervention. To boost social capital, modifiable factors determining social relationships and social capital should be identified.
Food-related activity has been linked with social activity from an evolutionary perspective [17]. Meal preparation ability may contribute to fostering not only family relationships but also social relationships with neighbors and friends. A qualitative study among rural older adults in the United States reported that most older adults gave or received some kind of food, especially cooked foods and garden products, and women were more likely to receive food gifts than men [18]. This food sharing was valued as a way to maintain reciprocity in social relations and to create a feeling of community membership [18]. In Japan, there is a culture of osusowake, which refers to the mutual exchange of foodstuffs between neighbors. This culture may contribute to strengthening community networks through supporting cultural activities including local festivals and seasonal events [19]. A systematic report on the benefits of cooking interventions showed that community kitchen programs had a positive influence on socialization [20]. Higher levels of cooking skills have been found to increase the frequency of cooking and confidence in cooking [21,22,23,24,25]. Thus, cooking skill may increase opportunities to build better social relationships with others, such as by sharing food with neighbors and attending local cultural activities.
Cooking skills represent a basic living ability that contributes to better diet quality. Several studies have shown the dietary benefits of cooking skills, such as higher consumption of vegetables and fruits and lower consumption of prepared meals, convenience foods, and ultra-processed foods [21,25,26,27]. However, little is known about the importance of cooking skills beyond dietary outcomes. Although one’s mother is the most common source for learning cooking skills, people also learn from partners, cookbooks, television shows, and cooking classes [23,28]. Thus, interventions are possible even in older age. In fact, because retirement allows more time to cook, it is reasonable for older people to start learning cooking skills in later life.
The aim of this study was to examine the associations of cooking skills with social relationships and social capital among older adults. First, to identify social relationships that can be modified through intervention, we examined the association of cooking skills with social relationships with neighbors and friends rather than with relatives. Specifically, the investigated social relationships included neighborhood ties, frequency of meetings with friends, number of meetings with friends, and shared meals with friends. Next, we examined the associations between cooking skills and individual-level social capital, which included civic participation, social cohesion, and reciprocity [29]. Finally, we examined gender differences in social relationships and social capital, as well as the mediating role of cooking skills in the associations of gender with social relationships and social capital.
## 2.1. Study Design and Participants
We used data from the Japan Gerontological Evaluation Study (JAGES), which was carried out in 39 municipalities across Japan in 2016. The study targeted community-dwelling older adults without functional disabilities, defined as not being certified as eligible to receive long-term public care insurance system services [30]. From October 2016 to January 2017, self-report questionnaires were mailed to 279,661 adults aged ≥65 years, and 196,438 individuals returned the questionnaire (response rate: $70.2\%$). The survey was conducted using random sampling in 22 large municipalities and was administered to all eligible residents in 17 small municipalities [25]. One-eighth of the target sample ($$n = 22,219$$) were randomly selected to receive the survey module inquiring about cooking skills. Of the 21,061 participants who had information on both gender and cooking skills and did not report any limitations in activities of daily living, those who had information on each outcome variable were included in the analysis; thus, the analytic sample differs by outcome: $$n = 20,799$$ for neighborhood ties, $$n = 20,477$$ for frequency of meetings with friends, $$n = 20,445$$ for the number of meetings with friends, $$n = 21,061$$ for shared meals with friends, $$n = 15,631$$ for civic participation, $$n = 20,424$$ for social cohesion, and $$n = 20,224$$ for reciprocity. Participants were informed that participation in the study was voluntary and that completing and returning the questionnaire indicated their consent to participate in the study.
## 2.2. Social Relationships
Neighborhood ties, frequency of meetings with friends, number of meetings with friends, and frequent shared meals with friends were evaluated to assess social relationships. All components of social relationships were assessed using the self-report questionnaire. For neighborhood ties, participants were asked, “What kind of interactions do you have with people in your neighborhood?” The four response options were [1] mutual consultation, lending and borrowing daily commodities, and cooperation in daily life; [2] standing and chatting frequently; [3] no more than exchanging greetings; and [4] none, not even greetings [29,31]. We classified the participants as having high (response 1), middle (response 2), or low (response 3 or response 4) levels of ties, collapsing the two response categories because only $2.27\%$ of the participants reported having no interactions with people in their neighborhood (response 4). The frequency of meetings with friends was assessed using the following question: “How often do you see your friends?”. The six response options were [1] ≥4 times/week; [2] 2–3 times/week; [3] 1 time/week; [4] 1–3 times/month; [5] a few times/year; [6] never [29]. In this analysis, the scores of 4, 2.5, 1, 0.5, 0.125, and 0 (times/week) were assigned to response categories 1, 2, 3, 4, 5, and 6, respectively, and the resulting variable was treated as continuous. The number of meetings with friends was assessed using the following question: “How many friends/acquaintances have you seen over the past month?”. The five response options were [1] ≥10; [2] 6–9; [3] 3–5; [4] 1–2; [5] 0 [29]. In this analysis, the scores of 10, 7.5, 4, 1.5, and 0 (persons/month) were assigned to responses 1, 2, 3, 4, and 5, respectively, and the resulting variable was treated as continuous. Frequent shared meals with friends were assessed using the following question: “Who do you usually have meals with?”. The possible responses were no one, spouse, children, grandchildren, friends, and other [32]. Multiple responses were possible. We defined participants who selected “friends” as eating with friends.
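The recoding just described can be captured in two lookup tables; the sketch below uses the score assignments given in the text, with function and variable names of our own choosing.

```python
# Sketch: map ordinal response categories to the approximate continuous
# scores described above before treating the variables as continuous.
MEETING_FREQ_PER_WEEK = {1: 4.0, 2: 2.5, 3: 1.0, 4: 0.5, 5: 0.125, 6: 0.0}
FRIENDS_SEEN_PER_MONTH = {1: 10.0, 2: 7.5, 3: 4.0, 4: 1.5, 5: 0.0}

def recode(responses: list[int], mapping: dict[int, float]) -> list[float]:
    return [mapping[r] for r in responses]

print(recode([2, 4, 6], MEETING_FREQ_PER_WEEK))   # [2.5, 0.5, 0.0]
print(recode([1, 3, 5], FRIENDS_SEEN_PER_MONTH))  # [10.0, 4.0, 0.0]
```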
## 2.3. Social Capital
Individual-level social capital was evaluated by assessing civic participation, social cohesion, and reciprocity using a validated scale designed to measure community-level social capital [29]. These variables were assessed using the self-report questionnaire, and details of this assessment have been described elsewhere [29]. For civic participation, we calculated the number of groups in which a respondent participated once or more [29]. Social cohesion was assessed using the following questions: “Do you think people living in your area can be trusted in general?” (community trust), “Do you think most people in your community offer assistance to others?” (norm of reciprocity), and “How strong is your residential place attachment?” (community attachment). Responses were rated on a five-point scale ranging from strongly trusted, agree strongly, or strongly attached to not at all. We calculated the number of items on which the participant strongly or moderately agreed [29]. Reciprocity was assessed using the following questions: “Do you have someone who listens to your concerns and complaints?” (received emotional support), “Do you listen to someone’s concerns and complaints?” (provided emotional support), and “Do you have someone who looks after you when you are sick for a few days?” (received instrumental support). The possible responses were no one, spouse, children, sibling/relative/parent/grandchildren, neighbors, friends, and other. Multiple responses were allowed. To explore the types of reciprocity that can be changed through intervention, we calculated the number of items for which the respondent selected neighbors, friends, or other.
## 2.4. Cooking Skills
Cooking skills were assessed using a cooking skills scale designed with consideration of basic Japanese cooking methods and typical meals; details of this assessment have been described elsewhere [25]. This scale had appropriate internal consistency (Cronbach’s α = 0.96) and notable discriminant validity, with women (experienced food preparers) scoring significantly better than men (food preparation novices) [25]. The scale consisted of seven items: [1] overall cooking skills; [2] able to peel fruits and vegetables; [3] able to boil eggs and vegetables; [4] able to grill fish; [5] able to make stir-fried meat and vegetables; [6] able to make miso soup; and [7] able to make stewed dishes. Participants were asked to evaluate their own cooking skills on a six-point scale ranging from unable (=0) to very well (=5). We calculated the mean of these seven items and divided the result into three categories: high (score of >4.0), middle (score of 2.1–4.0), and low (score of ≤2.0) [25]. For women, the middle group and the low group were combined into one category because the low group was quite small ($1.2\%$).
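A minimal sketch of the scale scoring and categorization described above, including the merging of the middle and low categories for women; names and example item scores are illustrative.

```python
# Sketch: mean of the seven 0-5 items, then the cut-points given above
# (high > 4.0; middle 2.1-4.0; low <= 2.0; middle/low merged for women).
def cooking_skill_category(items: list[float], female: bool) -> str:
    assert len(items) == 7 and all(0 <= x <= 5 for x in items)
    mean = sum(items) / 7
    if mean > 4.0:
        return "high"
    if female:
        return "middle/low"
    return "middle" if mean > 2.0 else "low"

print(cooking_skill_category([5, 5, 4, 4, 5, 4, 5], female=True))   # high
print(cooking_skill_category([3, 3, 2, 2, 3, 2, 2], female=False))  # middle
```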
## 2.5. Covariates
Covariates were assessed using the self-report questionnaire (Table S1). We included education, current annual household income, and marital status as socio-demographic characteristics [25]. For health status, we asked whether the participants were currently under medical treatment for any of the following conditions: cancer, heart disease, stroke, hypertension, diabetes mellitus, and hyperlipidemia. Furthermore, depressive symptoms were assessed using the Geriatric Depression Scale [33]. To account for personality aspects such as curiosity regarding cooking, which may be directly associated with social relationships, as a sensitivity analysis, we controlled for whether the participants talked with young people [34] and the participants’ willingness to take on a leadership role in a community activity. Participants with missing data on covariates were included in the analysis as dummy variables.
## 2.6. Statistical Analysis
The analyses were stratified by gender because different associations between cooking skills and dietary behaviors have been reported for men and women [25]. First, after stratifying the sample by gender, we tested differences using the chi-square test for categorical variables and the t-test or ANOVA for continuous variables; participants were then stratified by their level of cooking skills, and differences were tested in the same way. Second, for neighborhood ties, we used multinomial logistic regression to calculate adjusted relative risk ratios (RRRs) with $95\%$ CIs for high-level and middle-level ties, with low-level ties as the reference category. For the frequency and number of meetings with friends and for social capital (civic participation, social cohesion, and reciprocity), we used multivariate linear regression models, adjusting for potential confounders. For frequent shared meals with friends, we used logistic regression to calculate adjusted odds ratios with $95\%$ CIs for eating meals with friends. The models were adjusted for the following potential confounding factors: age, socio-demographic characteristics (education, annual normalized household income, and marital status), and health status (medical treatment of cancer, heart disease, stroke, hypertension, diabetes mellitus, and hyperlipidemia, as well as depressive symptoms).
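For readers who prefer Python to Stata, here is a hedged sketch of the three model families fitted on a small simulated dataset; the variables only approximate the real analytic dataset, and covariates are reduced to age for brevity.

```python
# Sketch: multinomial logit (neighborhood ties), OLS (meeting frequency),
# and logistic regression (shared meals) on simulated data via statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"cooking_skill": rng.uniform(0, 5, n),
                   "age": rng.integers(65, 90, n)})
df["meet_freq"] = 0.3 * df.cooking_skill + rng.normal(0, 1, n)
df["meals_friends"] = rng.binomial(
    1, 1 / (1 + np.exp(-(0.4 * df.cooking_skill - 1.5))))
df["ties_level"] = rng.integers(0, 3, n)   # 0 = low, 1 = middle, 2 = high

m1 = smf.mnlogit("ties_level ~ cooking_skill + age", df).fit(disp=0)
m2 = smf.ols("meet_freq ~ cooking_skill + age", df).fit()
m3 = smf.logit("meals_friends ~ cooking_skill + age", df).fit(disp=0)
print(np.exp(m3.params))   # odds ratios for eating meals with friends
```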
Additionally, we conducted structural equation modeling (SEM) analysis to explore the mediating role of cooking skills in the associations of gender with social relationships and social capital. In the SEM analysis, social relationships and social capital were treated as latent variables estimated from neighborhood ties, frequency of meetings with friends, number of meetings with friends, frequent shared meals with friends, civic participation, and reciprocity ($$n = 15,207$$ because of missing values on the variables used to estimate the latent variables). Cooking skill, operationalized as the mean value of the seven cooking skill items, was treated as a continuous variable. Overall model fit was tested using the comparative fit index, the root mean square error of approximation, and the standardized root mean square residual. All analyses were conducted using Stata, Version 15 (Stata Statistical Software: Release 15; StataCorp LP, College Station, TX, USA).
## 3. Results
The participants’ characteristics are summarized in Table S1. Women were about twice as likely as men to have a high level of neighborhood ties and to eat with their friends.
The associations between cooking skills and social relationships are shown in Table 1. The interaction effect between cooking skills and gender was significant: the associations with all components of social relationships were stronger among women than among men ($p < 0.05$ for the interaction). Women with a high level of cooking skills were 2.27 times ($95\%$ CI: 1.77–2.91) more likely to have a high level of neighborhood ties and 1.65 times ($95\%$ CI: 1.20–2.27) more likely to eat with friends, compared with women with middle/low-level cooking skills. High-level cooking skill was associated with a higher frequency and number of meetings with friends. Men with high-level cooking skills were 1.84 times ($95\%$ CI: 1.46–2.33) more likely to have a high level of neighborhood ties, compared with men with low-level cooking skills. For men, high-level cooking skill was also associated with a higher frequency and number of meetings with friends. These associations remained significant after adjusting for prosocial behavior-related personality (Table S2).
The associations between cooking skills and social capital are shown in Table 2. The interaction effect between cooking skills and gender was significant ($p < 0.05$ for the interaction). For women, high-level cooking skill was positively associated with all components of social capital, whereas the relationship between high-level cooking skill and social cohesion was non-significant for men. These associations remained significant after adjusting for prosocial behavior-related personality (Table S3).
Compared with men, women had higher levels of social relationships and social capital except for social cohesion (Tables S4 and S5). Women were 3.01 times ($95\%$ CI: 2.76–3.29) more likely to have a high level of neighborhood ties and 2.47 times ($95\%$ CI: 2.20–2.78) more likely to eat with friends, compared with men. Women had a higher frequency and number of meetings with friends (coefficient = 0.34, $95\%$ CI: 0.30–0.38 and coefficient = 0.67, $95\%$ CI: 0.57–0.78), more civic participation (coefficient = 0.23, $95\%$ CI: 0.19–0.27), and higher reciprocity (coefficient = 0.11, $95\%$ CI: 0.10–0.13). However, women also had lower social cohesion compared with their male counterparts (coefficient = −0.04, $95\%$ CI: −0.07 to −0.01).
Figure 1 shows the result of the SEM analysis for the association between gender and social capital, including social relationships but excluding social cohesion. The SEM analysis demonstrated good model fit (likelihood-ratio test of the model, chi-square = 208.4, $p < 0.001$; comparative fit index = 0.991; root mean square error of approximation = 0.03; standardized root mean square residual = 0.016). The association between gender and social capital was partially mediated by cooking skill (from gender to cooking skill: standardized coefficient = 0.570, $p < 0.001$; from cooking skill to social relationships including social capital: standardized coefficient = 0.152, $p < 0.001$). The indirect effect was $26.2\%$ of the total effect.
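The mediated proportion can be reproduced from the standardized paths reported above: the indirect effect is the product of the two path coefficients, and here the total effect is backed out from the reported $26.2\%$ rather than read from Figure 1.

```python
# Arithmetic behind the "indirect effect = 26.2% of total" statement.
a = 0.570                    # gender -> cooking skill (standardized)
b = 0.152                    # cooking skill -> social relationships/capital
indirect = a * b             # ~0.0866
proportion_mediated = 0.262  # value reported in the text
total = indirect / proportion_mediated
print(f"indirect = {indirect:.4f}, implied total = {total:.3f}, "
      f"implied direct = {total - indirect:.3f}")
```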
## 4. Discussion
To our knowledge, this is the first study to examine cooking skills as a modifiable determinant of social relationships and social capital. We found that, among older adults in Japan, a high level of cooking skill was positively associated with social relationships and social capital, and we identified significant interaction effects between cooking skill and gender on social relationships and social capital. We confirmed that women had higher levels of social relationships and social capital than men, and these associations were partially mediated by cooking skill.
Given that food plays a central role in connecting people in traditional Japanese culture [35], our results are plausible. Special meals for many rituals and celebrations throughout the year are handed down in various forms throughout Japan [36]. For events, people prepare special meals called gyoujisyoku and also hold “after parties” following the events [35]. Even outside of celebrations, many seasonal events connected with locally produced foods are held in communities [35]. For these events, people not only eat together but also make meals together, which strengthens friendships and cohesiveness [35]. Therefore, cooking skills are indispensable for these traditional and local events, and it is conceivable that people with higher levels of cooking skills will have more opportunities to play an important role in the community. We also found significant interaction effects between cooking skills and gender on social relationships and social capital: women are more likely to benefit from social relationships through a high level of cooking skills. This finding may be explained by women cooking more frequently than men [25], creating more opportunities for women to use their cooking skills.
In line with previous studies [10,15,18], we found that women were more likely than men to have strong social relationships and social capital. A nationally representative study in Ukraine showed that the gender difference in bonding social capital, which corresponds to the frequency/number of meetings with friends in our study, was explained by age and income [10]. Using SEM analysis, we found that cooking skill mediated $26\%$ of the association between gender and social capital. Therefore, we have newly identified cooking skill as a factor that helps explain the gender differences in social capital.
Among the components of social capital, social cohesion was found to be weakly associated with cooking skills for women but not for men. In contrast to the other social relationships and social capital, social cohesion was the only variable that was lower in women than in men (Table S5). Social cohesion, which is categorized as cognitive social capital rather than structural social capital, may have determinants that differ from those of other aspects of social relationships. A study conducted in the Netherlands showed that perceptions that one’s neighborhood is unsafe or unattractive and low SES were associated with low social cohesion but not with social networks (e.g., visiting neighbors, asking neighbors for advice) [37]. A study in the United States showed that neighborhood safety and SES were positively related to social cohesion [38]. Low-SES groups tend to be more pessimistic and express more feelings of unsafety and neighborhood problems compared with those with higher SES [37,39]. Therefore, neighborhood safety and SES may play key roles in cognitive social capital.
This study had several limitations. First, common method bias may have occurred because cooking skills, social relationships, and social capital were all assessed via the self-report questionnaire. To address this common source bias, as a sensitivity analysis, we adjusted for mental health and prosocial behaviors, which are related to the tendency to respond to the questions. It would also be useful to collect information from a second person, such as a family member or experienced food preparer, who could evaluate the participants’ cooking skills. Second, gender bias may have occurred because men tend to have higher self-esteem and more positive evaluations of their own abilities than women [40]. Men may overestimate their own cooking skills and women may underestimate theirs, which would bias the estimated associations with social relationships and social capital toward the null. Third, there may be unmeasured confounding factors, such as regional characteristics. For example, in communities where cooking classes and events involving meal preparation are popular, residents will have more opportunities to build social relationships as well as improve their cooking skills. Regional characteristics may also influence participants’ evaluation of social capital; for example, participants may not feel as engaged in society as they should if they belong to an active community, and vice versa. In the future, indicators of community characteristics will need to be considered. Fourth, because the JAGES survey study sites were not randomly selected, the generalizability of our findings to other populations in Japan is limited. Additionally, the cooking skill scale used in this study is specific to Japanese culture, so the results may be applicable only within Japan; cooking skill scales appropriate for each culture will need to be developed to evaluate these aspects of health promotion in other countries. Finally, because this study was cross-sectional, causality could not be established: reverse causation is possible, and unmeasured factors may confound the examined associations. For example, having a low level of social relationships with neighbors/friends may reduce the chances of learning cooking skills, which may lead to poor cooking skills. However, more than half of adult respondents report having learned most of their cooking skills from their mothers when they were teenagers [28].
## 5. Conclusions
Our study has produced novel findings regarding the associations of cooking skills with social relationships and social capital. Considering the health benefits of social relationships and social capital, our study is of great public health importance because it has demonstrated the importance of cooking skill, a factor that can be modified through intervention to improve social relationships and social capital.
# Systematic analysis of myocardial immune progression in septic cardiomyopathy: Immune-related mechanisms in septic cardiomyopathy
## Abstract
### Background
The immune infiltration and molecular mechanisms underlying septic cardiomyopathy (SC) have not been completely elucidated. This study aimed to identify key genes related to SC and elucidate the potential molecular mechanisms.
### Methods
The weighted correlation network analysis (WGCNA), linear models for microarray analysis (LIMMA), protein-protein interaction (PPI) network, CIBERSORT, Kyoto Encyclopedia of Genes and Genomes pathway (KEGG), and gene set enrichment analysis (GSEA) were applied to assess the key pathway and hub genes involved in SC.
### Results
We identified 10 hub genes, namely, LRG1, LCN2, PTX3, ELANE, TCN1, CLEC4D, FPR2, MCEMP1, CEACAM8, and CD177. Furthermore, we used GSEA for all genes and online tools to explore the function of the hub genes. Finally, we took the intersection between differentially expressed genes (DEGs) and hub genes to identify LCN2 and PTX3 as key genes. We found that immune-related pathways played vital roles in SC. LCN2 and PTX3 were key genes in SC progression, which mainly showed an anti-inflammatory effect. The significant immune cells in cardiomyocytes of SC were neutrophils and M2 macrophages.
### Conclusion
These cells may have the potential to be prognostic and therapeutic targets in the clinical management of SC. Excessive anti-inflammatory function and neutrophil infiltration are probably the primary causes of SC.
## 1. Introduction
Researchers recognize sepsis as a life-threatening condition that is caused by a dysregulated host response to infection [1]. It is the immune response of the organism to pathogens and immunogenic substances, causing autoimmune damage. Sepsis is common in severe health conditions, and its development may lead to septic shock and multiple organ dysfunction syndrome (MODS), which, once it occurs, carries a mortality of up to 28–56%. The heart is a main target organ of sepsis, and more than 50% of patients with severe sepsis have myocardial dysfunction, which is called septic cardiomyopathy (SC) [2]. However, due to a lack of uniform diagnostic criteria, the prevalence of SC varies across reports. Beesley et al. [3] reported the incidence of myocardial dysfunction in sepsis patients as ranging from 10 to 70%.
Inflammatory responses and immune cell infiltration widely exist in many types of cardiomyopathy. For example, in heart tissue with diabetic cardiomyopathy, inflammatory responses have been found to be significantly activated, as manifested by infiltration of multiple immune cells, increased cytokines, and multiple chemical factors [4]. Similar results have also been confirmed in animal models [5, 6]. In the state of diabetes mellitus, macrophages may induce tissue infiltration, transform into the proinflammatory phenotype of M1, and be associated with the activation of inflammatory signaling pathways in leukocytes [7]. Thus, inflammatory responses are closely related to cardiac function. Myocardial dysfunction can increase sepsis-induced mortality, but no reports have elucidated the underlying pathophysiological mechanisms of SC. The molecular mechanism that may be involved in the pathogenesis of SC remains to be studied, and there is a need to screen potential targets for the treatment of SC. Among the pathogenic factors contributing to SC, the imbalanced inflammatory responses caused by sepsis directly correlate with the dysfunction of myocardial cells. Previous studies have reported that sepsis begins with the host immune system’s response to invasive pathogens, eventually leading to activation of the innate immune response [8]. Bacterial products, including endotoxins and exotoxins, can directly or indirectly stimulate various target cells, including monocytes, polymorphonuclear neutrophils, or endothelial cells, thereby causing inflammation [9]. Endotoxins and exotoxins, through varied signal transduction pathways, activate both positive and negative feedback loops within the immune system. Sepsis-induced dysregulation of the normal immune response can lead to a variety of harmful effects, including SC. Therefore, a thorough understanding of the molecular immune mechanism involved in the pathogenesis of SC could be one of the breakthroughs that may help in the treatment of SC in the future.
In this study, we downloaded the gene expression profile (GSE79962) deposited by Matkovich et al. [10] from the Gene Expression Omnibus (GEO) database to further uncover biomarkers associated with SC development and progression. We aimed to identify key genes related to SC and to elucidate the potential molecular mechanisms through bioinformatics analysis.
## 2.1. Data
We downloaded the microarray dataset GSE79962 from the NCBI Gene Expression Omnibus (GEO) database. The data involved ischemic heart disease (IHD, 11 samples), non-failing heart (control, 11 samples), dilated cardiomyopathy (DCM, 9 samples), and septic cardiomyopathy (SC, 20 samples). We chose all the samples in the study. We downloaded the annotation information of the microarray, GPL6244, Affymetrix Human Gene 1.0 ST Array [transcript (gene) version]. We preprocessed the raw data using R version 3.6.0. The analysis workflow is presented in Figure 1.
**FIGURE 1:** *Workflow used for bioinformatics analyses.*
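For readers who wish to reproduce this first step, the sketch below shows one way to retrieve the dataset in R with the Bioconductor GEOquery package. It is an illustrative sketch rather than the authors' script, and the object names are ours.

```r
# Minimal sketch: download GSE79962 as an ExpressionSet (GEOquery assumed installed)
library(GEOquery)
library(Biobase)

gse      <- getGEO("GSE79962", GSEMatrix = TRUE)[[1]]  # platform GPL6244
expr_mat <- exprs(gse)   # genes/probes x 51 samples
pheno    <- pData(gse)   # sample annotations (control, SC, IHD, DCM)
```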
## 2.2. WGCNA network construction
We constructed co-expression networks using the weighted correlation network analysis (WGCNA) package in R [11]. We did not filter genes. We imported gene expression values into WGCNA and created co-expression modules using the automatic network construction function blockwiseModules with default settings. We chose the power value at which the scale-free topology fit index reached 0.9. The TOMType was unsigned, the mergeCutHeight was 0.25, and the minModuleSize was 50.
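These parameter choices map directly onto WGCNA function arguments. The following is a minimal sketch under the assumption that `datExpr` is a samples × genes expression matrix; it illustrates the calls named in the text, not the authors' exact script.

```r
library(WGCNA)
# datExpr: samples x genes matrix (no gene filtering, as described above)

# Pick the soft-thresholding power where the scale-free fit R^2 reaches 0.9;
# for this dataset the chosen power was 4 (see Results)
sft <- pickSoftThreshold(datExpr, powerVector = 1:20, RsquaredCut = 0.9)

net <- blockwiseModules(datExpr,
                        power          = 4,
                        TOMType        = "unsigned",
                        minModuleSize  = 50,
                        mergeCutHeight = 0.25,
                        numericLabels  = FALSE)  # modules labeled by color
table(net$colors)  # module sizes
```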
## 2.3. Module and gene selection
To find biologically or clinically significant modules for SC, module eigengenes were used to calculate the correlation coefficient with sample traits. We calculated the intramodular connectivity of each gene (function softConnectivity). We reasoned that genes with high connectivity tend to be hub genes with essential functions. We imported the positively correlated modules into Cytoscape software (version 3.7.1) and used the MCODE plugin, setting the degree cutoff at no less than 10, to screen the key sub-modules.
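In WGCNA terms, this step amounts to correlating module eigengenes with a trait indicator and computing gene connectivity. A minimal sketch, assuming `net` and `datExpr` from the previous block and a hypothetical 0/1 SC indicator `traitSC`:

```r
# Module eigengene vs. trait correlation (traitSC: hypothetical 0/1 SC indicator)
MEs <- moduleEigengenes(datExpr, colors = net$colors)$eigengenes
moduleTraitCor <- cor(MEs, traitSC, use = "p")                     # Pearson
moduleTraitP   <- corPvalueStudent(moduleTraitCor, nrow(datExpr))  # p-values

# Per-gene connectivity, as computed by softConnectivity
k <- softConnectivity(datExpr, power = 4)
```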
## 2.4. Functional analysis of module genes
Because the three modules were all positively correlated with SC, we imported all genes from the three sub-modules into the STRING database (version 11.0). We obtained the results of gene ontology (GO) enrichment analysis, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis, and protein-protein interaction (PPI). GO terms and pathways significantly enriched in a module's genes relative to the background were defined by hypergeometric tests, with a minimum required interaction score of 0.700 (high confidence) for the PPI network. After that, we imported the PPI network into Cytoscape software (version 3.7.1) and used the cytoHubba plugin to screen the hub genes by 12 methods. We performed functional analysis and disease prediction of hub genes through the online tools Metascape and ToppGene. We used the "limma" package of the R language to identify the differentially expressed genes (DEGs) between the SC and control groups, according to an adjusted p-value < 0.05 and |logFC| > 0.7. We screened the genes shared between DEGs and hub genes as key genes.
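The DEG screen described above corresponds to a standard limma workflow. A minimal sketch, assuming `expr_sub` holds the normalized expression values for the 11 control and 20 SC samples and `group` is the corresponding two-level factor (names are ours):

```r
library(limma)
# expr_sub: normalized expression (genes x samples) for control and SC samples;
# group: factor with levels "control" and "SC"
design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)

fit  <- lmFit(expr_sub, design)
fit2 <- eBayes(contrasts.fit(fit, makeContrasts(SC - control, levels = design)))

tab  <- topTable(fit2, number = Inf, adjust.method = "BH")
degs <- subset(tab, adj.P.Val < 0.05 & abs(logFC) > 0.7)  # criteria from the text
# key genes are then the intersection of rownames(degs) with the hub gene list
```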
## 2.5. GSEA and immune infiltration analysis using the CIBERSORT method
Meanwhile, we performed gene set enrichment analysis (GSEA) with the hallmark gene sets, comparing the 11 control samples and 20 SC samples using the default settings. The criteria for significant results were set as normalized enrichment score |NES| > 1, nominal p-value < 0.05, and FDR < 0.25. To characterize the immune cell subtypes in SC progression, we applied the CIBERSORT software to quantify the immune cell fractions from the gene expression matrix derived from the SC samples. Then, we identified the correlations between the hub genes and immune cell subtypes.
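For an in-R equivalent of this step, the fgsea package can stand in for the GSEA tool the authors used. The sketch below assumes a local copy of the MSigDB hallmark .gmt file (file name hypothetical) and a named gene-level ranking vector `stats`, for example the limma t-statistics from the SC vs. control contrast.

```r
library(fgsea)

hallmark <- gmtPathways("h.all.v7.4.symbols.gmt")  # hypothetical local path
res <- fgsea(pathways = hallmark, stats = stats, minSize = 15, maxSize = 500)

# significance filter mirroring the criteria in the text
sig <- res[abs(res$NES) > 1 & res$pval < 0.05 & res$padj < 0.25, ]
```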
## 3.1. Screening the key modules in the network
Expression data of 18,818 genes in the 51 samples were screened. These samples included SC, control (non-failing donor heart), ischemic heart disease (IHD), and dilated cardiomyopathy (DCM). They were used to construct the co-expression modules with the WGCNA algorithm. Following the data preprocessing, we set the power value. The power value was four when the scale independence condition reached 0.9 (Supplementary Figure 1). We initially clustered genes into 26 correlated modules and identified sample-associated co-expression modules using WGCNA (Figures 2A, B). After merging similar modules, we obtained 18 co-expression modules, illustrated as the branches of a dendrogram with different colors. We focused on the SC group. Therefore, we chose the dark green module (Pearson cor = 0.69, p = 3e−8), blue module (Pearson cor = 0.83, p = 3e−14), and orange module (Pearson cor = 0.74, p = 4e−10) as moderately or more positively related with SC. The number of genes in the three modules was 2,652 (blue), 73 (dark green), and 2,041 (orange). The information about the genes in the three modules is listed in Supplementary Table A1. The relationship of module membership to gene significance was cor = 0.9 and p < 1e−200 in the blue module (Figure 2C), cor = 0.9 and p = 3.3e−34 in the dark green module (Figure 2D), and cor = 0.79 and p < 1e−200 in the orange module (Figure 2E). We imported the three modules into Cytoscape software and used MCODE to screen the three sub-modules, setting the MCODE score criterion to more than 10. After screening, we obtained 58 genes from the orange module (average MCODE score = 33.98), 37 genes from the blue module (average MCODE score = 14.23), and 53 genes from the dark green module (average MCODE score = 41.16) (Supplementary Table A2).
**FIGURE 2:** *Overview of WGCNA network construction of all genes (A) Gene modules’ dendrogram plots of all genes; (B) module-trait relationships of four groups in 18 modules. (C–E) Module membership vs. gene significance between three significant modules, including blue module (Pearson cor = 0.9, p < 1e-200), dark green module (cor = 0.9, p = 3.3e-34), and orange module (Pearson cor = 0.79, p < 1e-200); (F) the bubble diagram showing the GO (biological process, BP) function enrichment of genes in sub-modules. The size represents the gene counts, and node colors show the gene expression negative Log10_FDR (false discovery rate).*
## 3.2. Functional enrichment analysis of genes in critical modules
We imported all of the screened genes (148 in total) from the three modules into the STRING database to construct a PPI network. We also obtained GO enrichment analysis results, using a false discovery rate (FDR) < 0.05 threshold.
We obtained the top 10 biological process (BP) terms, including neutrophil activation (FDR = 1.29E–22), neutrophil degranulation (FDR = 2.65E–22), regulated exocytosis (FDR = 6.34E–22), exocytosis (FDR = 2.02E–21), leukocyte mediated immunity (FDR = 2.20E–21), leukocyte activation involved in immune response (FDR = 9.78E–21), immune response (FDR = 9.07E–20), leukocyte activation (FDR = 1.86E–18), secretion (FDR = 1.86E–18), and cell activation (FDR = 2.56E–18) (Figure 2F). We obtained GO terms of the top 10 cellular components (CC), consisting of secretory granule (FDR = 2.49E–19), cytoplasmic vesicle part (FDR = 1.34E–15), cytoplasmic vesicle (FDR = 4.74E–13), vesicle (FDR = 4.74E–13), tertiary granule (FDR = 4.74E–13), specific granule (FDR = 3.63E–11), secretory granule lumen (FDR = 6.73E–11), secretory granule membrane (FDR = 1.42E–10), ficolin-1 rich granule (FDR = 1.82E–10), and endomembrane system (FDR = 1.33E–09) (Figure 3A). Meanwhile, we harvested GO terms of 8 molecular functions (MFs), comprising protein binding (FDR = 0.00028), cytokine binding (FDR = 0.00028), enzyme binding (FDR = 0.0018), signaling receptor binding (FDR = 0.0124), CXCR chemokine receptor binding (FDR = 0.0297), protease binding (FDR = 0.0361), cytokine receptor activity (FDR = 0.0361), and pantetheine hydrolase activity (FDR = 0.0361) (Figure 3B). Regarding the KEGG pathway enrichment, the genes were significantly enriched in pathways including neutrophil degranulation (FDR = 4.58E–23), innate immune system (FDR = 1.30E–16), immune system (FDR = 1.30E–16), signaling by interleukins (FDR = 8.52E–06), cytokine signaling in immune system (FDR = 0.00025), and others (Figure 3C and Supplementary Table A3).
**FIGURE 3:** *Analysis of gene ontology (GO) function, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, and protein-protein interaction (PPI) network of genes in septic cardiomyopathy (SC)-related modules. (A–C) The bubble map showing the GO function (cellular component, CC, and molecular function, MF) and KEGG pathways were constructed by the STRING database. The sizes represent negative Log10 (FDR). (D) The gene PPI network was also constructed based on the STRING database. (E) The plots showing the top 10 higher degree hub genes for SC.*
We used Cytoscape software to visualize the PPI network from the STRING database, which is shown in Figure 3D. Through the cytoHubba plugin, we exported the results of 12 algorithms and screened the top 10 genes as hub genes. These included LRG1, LCN2, PTX3, ELANE, TCN1, FPR2, CLEC4D, MCEMP1, CEACAM8, and CD177 (Figure 3E and Supplementary Table A4).
## 3.3. GSEA and immunocyte infiltration analysis
The results of the GO and KEGG enrichment analyses also indicated that immune and inflammatory events play a vital role in cardiac tissue in SC. Furthermore, GSEA with the hallmark gene sets between control and SC indicated significant differences in the myocardium of SC, such as TGF-beta signaling, TNF-alpha signaling through NF-kappa B, the inflammatory response, and the TNF and P53 pathways (criteria: |NES| > 1, nominal p < 0.01, FDR < 0.25; Figure 4A and Table 1). To characterize the immunocyte status of cardiac tissues in SC, we performed a tissue immune infiltration analysis. We found no significant difference between the myocardium of SC and the control in the 22 immune subtypes, except for M2 macrophages (Wilcoxon, p = 0.032) and neutrophils (Wilcoxon, p = 0.00064; Figures 4C, D). A heatmap of immune cell subtypes is illustrated in Figure 4B.
**FIGURE 4:** *Immunocyte infiltration analysis and potential immunocyte subtype detection. (A) This diagram shows the immune-related pathways by gene set enrichment analysis (GSEA) analysis. (B) This heatmap shows the immunocyte infiltration difference between SC and control heart samples. (C,D) Box plots presenting significantly infiltrated immunocyte subtypes, neutrophils, and M2 macrophages. (E) This correlated heatmap shows the relationship between immunocytes and hub genes. (F) Intersection between DEGs and hub genes, identifying the key genes, LCN2, and PTX3.* TABLE_PLACEHOLDER:TABLE 1
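These group comparisons reduce to per-cell-type Wilcoxon rank-sum tests on the CIBERSORT output. A minimal sketch, assuming `fractions` is a samples × 22 matrix of estimated cell fractions and `grp` is the control/SC factor (object names are ours):

```r
# Two-sided Wilcoxon rank-sum test per immune cell subtype
pvals <- apply(fractions, 2, function(f) wilcox.test(f ~ grp)$p.value)

pvals[c("Macrophages M2", "Neutrophils")]  # reported: p = 0.032 and p = 0.00064
```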
## 3.4. Identification of the relationship between hub genes and key immune cell subtypes in SC
We found that two immune cell subtypes showed significant infiltration: neutrophils and M2 macrophages. Importantly, the neutrophils had the highest relative infiltration value. We analyzed the relationship between the 10 hub genes in immune-related pathways and the two immune cell subtypes and found that the neutrophils had positive correlations with all hub genes, especially with CLEC4D, CD177, LCN2, and TCN1, whereas M2 macrophages had negative correlations with all hub genes and with neutrophils (Figure 4E). This suggests that, to some extent, neutrophils promote the progression of SC in the myocardium, whereas M2 macrophages do the opposite.
## 3.5. Investigating the functional role of hub genes and identification of key genes
To further understand how the hub genes were correlated with SC, we applied the Metascape online database to explore their biological functions. The top five GO term enrichments in hub genes included neutrophil degranulation, neutrophil activation involved in immune response, neutrophil activation, neutrophil mediated immunity, and granulocyte activation. The results showed that all hub genes are involved in neutrophil-related and immune responses (Table 2). We explored enriched pathways of hub genes involved in neutrophil degranulation, the innate immune system, antimicrobial peptides, and similar topics (Supplementary Table A5). We also predicted diseases related to hub genes using the ToppGene database. These diseases included sepsis, immune neutropenia, septicemia, and others (Table 3 and Supplementary Table A5). We found 467 DEGs between the SC group and the control (Supplementary Table A6). Finally, we took the intersection between DEGs and hub genes to identify LCN2 and PTX3 as key genes (Figure 4F). As for the key genes, we compared the expression levels of LCN2 and PTX3 across the different kinds of myocardial injury in these samples. We found that LCN2 and PTX3 had significantly higher expression in the SC group (Supplementary Figure 2).
## 4. Discussion
In this study, we screened key modules in SC by analyzing a public dataset (GSE79962). Compared with the control group, IHD group, and DCM group, a total of three modules were positively correlated with SC. These included the orange module, blue module, and dark green module. We chose 148 genes from the three modules using the MCODE plugin of Cytoscape for functional enrichment analysis. Notably, we found most genes in the three modules were enriched in the immune response, leukocyte activation, neutrophil degranulation, and similar events.
Regarding the KEGG pathway enrichment, the genes were significantly enriched in immune-related pathways, including neutrophil degranulation, the innate immune system, the immune system, and cytokine signaling in the immune system. The results of GSEA with the hallmark gene sets between control and SC indicated that the significantly different pathways in the myocardium of SC were immune-related. Neutrophils degranulate during phagocytosis, releasing a series of lysosomal enzymes that damage blood vessels and surrounding tissues, leading to cardiac dysfunction [12]. Sepsis leads to auto-amplifying cytokine production known as the cytokine storm. At the same time, activation of Toll-like receptors (TLRs) releases a large number of inflammatory cytokines, such as TNF, IL-1, and interferon regulatory factor 3 (IRF3) [13]. The activation of these immune responses damages myocardial tissue and impairs cardiac function. Previous studies have reported that TLRs can mediate SC through activation of innate immune and inflammatory responses [14, 15]. TLRs are pattern recognition receptors that activate the innate immune response, acting largely through NF-κB, an important transcription factor controlling the expression of inflammatory cytokine genes [16]. TLRs play a major role in the pathophysiology of cardiac dysfunction during sepsis [14]. In an animal model, TLR2 was found to influence cardiac function by deteriorating sarcomere shortening [17, 18]. TLR deficiency attenuated cardiac dysfunction in a mouse model by inhibiting sepsis-induced activation of TLR4-mediated NF-κB signaling pathways and preventing macrophage and neutrophil infiltration. In addition, lipopolysaccharide (LPS) has been demonstrated to induce macrophage inflammation through TLRs, leading to the release of proinflammatory cytokines [19]. In patients with sepsis, increased serum lactate levels are associated with increased mortality through activation of innate immune and inflammatory responses [20, 21]. Through the CIBERSORT method, we found that the infiltration values of neutrophils and M2 macrophages in the myocardium of SC differed significantly. Therefore, immunity and inflammation play important roles in myocardial dysfunction in SC. In this condition, neutrophils were positively correlated with immune-related genes, and M2 macrophages were negatively correlated with immune-related genes.
Macrophages are the “frontier soldiers” of innate immunity. Macrophage function is classified into two types, M1 (classically activated) and M2 (alternatively activated). M1 macrophages secrete chemokines with a proinflammatory function, whereas M2 macrophages mainly secrete chemokines in the late stage of inflammation to play an anti-inflammatory role [22, 23]. M2 macrophages mainly promote tissue remodeling and repair, and previous studies showed that an increase in M2 macrophage infiltration in the myocardium promotes fibrosis [23–25]. The decrease of M2 macrophages in SC leads to a reduction of anti-inflammatory chemokines and supports the progression of SC. The polarization of M1 macrophages is mainly regulated by the transcription factors IRF5 and STAT1, and that of M2 macrophages by IRF4 and STAT6 [26]. Many immunomodulators can promote M1 macrophage polarization, such as IFN, TNF, IL-1, IL-6, LPS, B-cell activating factor (BAFF), and a proliferation-inducing ligand (APRIL) [26–29]. In addition, some metabolites, such as saturated fatty acids and oxidized lipoproteins, can also induce M1 macrophage polarization [27]. Similarly, inflammatory factors, such as IL-4, IL-13, IL-10, IL-33, and TGF, as well as metabolites such as unsaturated fatty acids and retinoic acids, can induce M2 macrophage polarization [30, 31]. Likewise, neutrophils are key factors in the immune response to sepsis. Under normal conditions, neutrophils control infection, but excessive stimulation or dysregulated neutrophil functions are believed to be responsible for sepsis pathogenesis [12]. In SC, we found significant neutrophil infiltration in cardiac tissues.
We screened 10 hub genes from the PPI network constructed from the 148 genes. These 10 genes, LRG1, LCN2, PTX3, ELANE, TCN1, CLEC4D, FPR2, MCEMP1, CEACAM8, and CD177, are also directly involved in several immune-related pathways. Next, we used online tools (the ToppGene and Metascape databases) to explore the function of the hub genes. The results showed that the hub genes were related to immune-related pathways and diseases. Among these genes, LRG1 is highly correlated with neutrophils and with other hub genes, including TCN1, FPR2, CLEC4D, and CD177. LRG1 is expressed during granulocyte differentiation. From the GeneCards database, we found that the super pathway for LRG1 is the innate immune system. BP terms included response to bacterium, positive regulation of the transforming growth factor-beta receptor signaling pathway, neutrophil degranulation, and similar terms. Recombinant human LRG is used as a diagnostic aid in acute appendicitis [32]. Similarly, LRG1 may be used as a diagnostic marker for SC.
In our research, we found that among the 10 hub genes, LCN2 and PTX3 were also DEGs and thus qualified as key genes. LCN2 encodes lipocalin-2 (also known as neutrophil gelatinase-associated lipocalin), a critical iron regulatory protein during physiological and inflammatory conditions that exerts a mostly protective role in inflammatory bowel diseases and urinary tract infection by limiting bacterial growth [33, 34]. In the heart, some reports also indicated that LCN2 was significantly expressed in in vivo and in vitro experiments on cardiac hypertrophy and heart failure, and high plasma LCN2 was correlated with high mortality and myocardial dysfunction in severe sepsis [35, 36]. Nevertheless, Guo et al. [37] found that LCN2–/– mice displayed an up-regulation of M1 macrophages but a down-regulation of M2 macrophages. These mice had profound up-regulation of proinflammatory cytokines, suggesting that LCN2 acts as an anti-inflammatory regulator in macrophage activation. Overexpression of LCN2 is consistent with the down-regulation of M2 macrophages in SC. PTX3 plays a role in the regulation of innate resistance to pathogens and inflammatory reactions. Paeschke et al. [38] showed that inflammatory injury of heart tissue was aggravated in mice when PTX3 was knocked down. Yamazaki et al. [39] demonstrated that bacterial LPS induced the expression of the anti-microbial glycoproteins PTX3 and LCN2 in macrophages. Therefore, we concluded that LCN2 and PTX3 might lead to excessive anti-inflammatory effects during SC progression.
Our study has some limitations. First, the sample size of this study is relatively small and additional clinical samples are necessary; however, we could hardly obtain more, given the difficulty of acquiring SC samples. Second, although LCN2 and PTX3 are reportedly related to neutrophil function, there is no direct evidence validating that LCN2 and PTX3 are involved in the progression of SC. Simultaneously, the underlying mechanism by which LCN2 and PTX3 affect SC remains unclear. Last but not least, although our conclusion is based on bioinformatics analysis, more experimental results would increase the reliability of the findings. We expect further understanding of the regulatory functions of the key genes in SC through other means.
## 5. Conclusion
To sum up, we found that the genes in three modules play vital roles in immune-related pathways. LCN2 and PTX3 were key genes in SC progression and mainly showed anti-inflammatory effects. The significant immune cells in cardiomyocytes of SC were neutrophils and M2 macrophages. Therefore, LCN2 and PTX3 may serve as prognostic and therapeutic targets in the clinical management of SC. Excessive anti-inflammatory function and neutrophil infiltration were probably the primary causes of SC, but this needs further analysis.
## Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary material.
## Author contributions
DM, XQ, and Z-AZ analyzed the data and drafted the manuscript. HL and PC edited the manuscript. BZ supervised the project, gave advice regarding the project design, and edited the manuscript. All authors contributed to the article and approved the submitted version.
## Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
## Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcvm.2022.1036928/full#supplementary-material
# Increased Expression of Autophagy-Related Genes in Alzheimer’s Disease—Type 2 Diabetes Mellitus Comorbidity Models in Cells
## Abstract
The association between Alzheimer’s disease (AD) and type 2 diabetes mellitus (T2DM) has been extensively demonstrated, but despite this, the pathophysiological mechanisms underlying it are still unknown. In previous work, we discovered a central role for the autophagy pathway in the common alterations observed between AD and T2DM. In this study, we further investigate the role of genes belonging to this pathway, measuring their mRNA expression and protein levels in 3xTg-AD transgenic mice, an animal model of AD. Moreover, primary mouse cortical neurons derived from this model and the human H4Swe cell line were used as cellular models of insulin resistance in AD brains. Hippocampal mRNA expression showed significantly different levels for the Atg16L1, Atg16L2, GabarapL1, GabarapL2, and Sqstm1 genes at different ages of 3xTg-AD mice. Significantly elevated expression of Atg16L1, Atg16L2, and GabarapL1 was also observed in H4Swe cell cultures in the presence of insulin resistance. Gene expression analysis confirmed that Atg16L1 was significantly increased in cultures from transgenic mice when insulin resistance was induced. Taken together, these results emphasise the association of the autophagy pathway with AD-T2DM comorbidity, providing new evidence about the pathophysiology of both diseases and their mutual interaction.
## 1. Introduction
Alzheimer’s disease (AD) is the most common cause of dementia; 55 million people are estimated to live with this condition worldwide, and numbers are expected to increase to 113 million by 2050 [1], with an enormous impact on global health and a huge economic burden. Therapeutic approaches encompass cholinesterase inhibitors and memantine as symptomatic agents [2,3]. Great hopes have been raised by antibodies targeting amyloid-beta aggregates in the brain as potential disease-modifying interventions [4,5]. However, whether meaningful clinical efficacy and cost-effectiveness can be reached remain open questions, while safety concerns need further analysis and clarification [6,7,8]. Therefore, preventive actions directed at potentially modifiable risk factors are crucial to reduce the severe disease burden of AD [2].
Both genetic and environmental factors contribute to AD risk. Dominantly inherited mutations in the APP, PSEN1, and PSEN2 genes account for the rarer early-onset cases, whereas carrying at least one copy of the APOE ε4 allele is the strongest genetic risk factor for the common late-onset form [9]. Although age is the most relevant factor with the largest impact, additional environmental components make important and potentially modifiable contributions [2]. Among the latter, consistent evidence supports major roles for education, hypertension, obesity, hearing loss, traumatic brain injury, smoking, depression, physical inactivity, social isolation, type 2 diabetes mellitus (T2DM), and air pollution as potential targets of intervention in different life stages, particularly at midlife [2]. In addition, T2DM has been compellingly associated with a significantly greater risk of dementia [10,11,12]. Moreover, metabolic syndrome and obesity, which are often associated with T2DM, represent dementia risk factors per se, thus further complicating the picture [13,14]. It has been suggested that since T2DM is modifiable, its reduction could constitute a possible strategy for reducing future AD incidence. Indeed, it has been estimated that if T2DM were removed as a risk factor, about 1.1% of dementia cases could be prevented. Although this percentage is low, the number of impacted people is nonetheless high when considering global incidence rates [12].
Despite the convincingly demonstrated association between AD and T2DM, the pathophysiological mechanisms responsible are still unknown. As a result, the best approach to be adopted for prevention still needs to be elucidated [12]. Furthermore, whether antidiabetic treatments represent a useful way forward is uncertain at present, as available data are inconsistent [2,15,16]. Several hypotheses have been proposed to explain the mechanistic link between AD and T2DM [17,18,19]. Among them, insulin signalling is impaired in both AD and T2DM, and the definition of AD as type 3 diabetes is based on the observed insulin resistance [20,21,22,23]. In addition, defects in mitochondrial function are shared by AD and T2DM, and a common causative role has thus been proposed for this defect based on preclinical and clinical findings [24,25]. In a previous study, we adopted a systems biology approach to address this important gap in knowledge about the common pathophysiological dysregulations contributing to AD and T2DM comorbidity. We compared the molecular mechanistic networks underlying brain T2DM pathophysiology in AD and control subjects by analysing transcriptional datasets with a novel approach. We discovered a central role for the autophagy pathway in the mechanisms shared between AD and T2DM [26]. Autophagy is an intracellular degradation pathway that traffics organelles, dysfunctional proteins, and infectious agents to lysosomes via specific vesicles called autophagosomes [27]. In agreement with our findings, the relevance of autophagy in AD is supported by a wealth of data, and targeting this mechanism has been proposed as a potential avenue for drug discovery [28,29,30]. Moreover, abnormal autophagic responses have been implicated in metabolic disorders [31].
The aim of this study was to further investigate the role of genes identified in our previous studies as relevant for the pathophysiology of Alzheimer’s disease and T2DM comorbidity, namely ATG16L1, ATG16L2, GABARAP, GABARAPL1, GABARAPL2, and SQSTM1. We thus investigated the modulation of these genes in an animal model of AD and in cellular models of insulin resistance in Alzheimer’s disease brains.
## 2.1. Antibodies
In immunofluorescence experiments, the following antibodies and dilutions were used: anti-Phospho SQSTM1/p62 (S349) (Abcam, Cambridge, UK, cat # ab211324) 1:100; anti-SQSTM1/p62 (Abcam, cat # ab56416) 1:50; anti-β-Tubulin III (Merck Millipore, Burlington, MA, USA, cat # T2200) 1:500; anti-MAP2 (Merck Millipore, cat # M9942) 1:500; anti-GFAP (Thermo Fisher Scientific, Waltham, MA, USA, cat # 13-0300) 1:800; donkey anti-rabbit-IgG Alexa Fluor 488 (Thermo Fisher Scientific, cat # R37118) 1:1000; donkey anti-mouse-IgG Alexa Fluor 594 (Thermo Fisher Scientific, cat # A-21203) 1:1000; goat anti-mouse IgG1 CF 568 (Merck, cat # SAB4600313) 1:1000; goat anti-rat Alexa Fluor 647 (Thermo Fisher Scientific, cat # A21247) 1:1000; and DAPI (4′,6-diamidino-2-phenylindole, Merck Millipore, cat # D9542) 1:5000. In Western blotting experiments, the following antibodies and dilutions were used: anti-Phospho SQSTM1/p62 (S349) (Abcam, cat # ab211324) 1:2000; anti-SQSTM1/p62 (Abcam, cat # ab56416) 1:2000; anti-GAPDH (Abcam, cat # ab8245) 1:5000; anti-phospho-Akt (Ser473, D9E, Cell Signaling, Danvers, MA, USA, cat # 4060) 1:2000; anti-Akt (Cell Signaling, cat # 9272) 1:1000; goat anti-mouse IgG IRDye 800 (Li-Cor, Lincoln, NE, USA, cat # 926-32210) 1:5000; and goat anti-rabbit-IgG Alexa 680 (Thermo Fisher Scientific, cat # A21076) 1:5000.
## 2.2. Animals
A colony of triple-transgenic AD mice (3xTg-AD) expressing three mutant human transgenes—PS1M146V, APPSwe, and tauP301L—was established at the University of Verona by purchasing transgenic mice from The Jackson Laboratory (Sacramento, CA, USA). C57BL/6J mice were purchased from Charles River Italia (Calco, Italy). Mice were housed at 3/cage at a constant room temperature of 21 ± 1 °C and maintained on a 12:12 h light/dark cycle with lights on at 7.30 a.m., with food and water freely available. All efforts were made to minimise animal suffering and numbers. This study is compliant with the ARRIVE guidelines [32]. Procedures involving animals were conducted in conformity with the EU guidelines (2010/63/UE) and Italian law (decree 26/14) and were approved by the University of Verona’s ethical committee and the local authority’s veterinary service. The Italian Health Ministry Ethical Committee for Protection of Animals approved the study (approval number: 283/2019-PR). For gene expression studies, 18 female transgenic mice (six/group aged 6, 12, and 18 months) and 18 (six/group) female wild-type mice were used. For immunofluorescence on brain sections, 12-month-old female 3xTg-AD and wild-type mice were used (n = 3/group). Mice for gene or protein expression experiments were anesthetised using Tribromoethanol (Merck Millipore) and sacrificed. Brain dissections were performed in Petri dishes on ice; the hippocampi were collected, flash-frozen in liquid nitrogen, and stored at −80 °C until analysis. The whole procedure did not exceed 5 min, to preserve brain integrity. Mice for immunofluorescence experiments were anesthetised using Tribromoethanol and perfused transcardially with 0.1 M phosphate buffered saline solution (PBS), followed by formaldehyde 10% V/V, buffered 4% w/v (Titolchimica, Rovigo, Italy); brains were extracted and postfixed overnight. Seven dams were used (wild-type: n = 4; 3xTg-AD: n = 3), and neuronal cultures were prepared from 5–6 pups/preparation for each genotype.
## 2.3. Neuronal Cultures
Human glioblastoma H4 cell lines stably expressing the βAPP-Swe mutation (K595N/M596L) were a kind gift from Prof. M. Pizzi, University of Brescia, Italy. Cells were cultured in DMEM with 10% foetal bovine serum (FBS), 100 Units/mL penicillin, 2 mM glutamine, and 100 μg/mL streptomycin (Thermo Fisher Scientific) [33]. After reaching 80% confluence, and twenty-four hours before starting the experiment, cells were trypsinised and seeded at a density of 4 × 10⁶ cells in T75 cm² flasks. Treatments were performed in DMEM medium without FBS.
## 2.4. Primary Cortical Neurons
Primary mouse cortical cultures were prepared as previously described [34] with modifications [35]. Briefly, newborn C57BL/6 and 3xTg-AD mouse (P0–P1) brains were isolated and the cortices were dissected in 1X ice-cold DPBS medium (cat # 14200075, Thermo Fisher Scientific). After removal of the meninges, cortices were washed twice and enzymatically digested with a DPBS solution containing 0.25% (v/v) trypsin (Thermo Fisher Scientific), 1 mM sodium pyruvate, 0.1% (w/v) glucose, and 10 mM HEPES pH 7.3 for 20 min at 37 °C. Following a 5 min incubation with 0.1 mg/mL DNAse I (Merck Millipore) at room temperature, the enzymatic reaction was stopped with an MEM solution containing 10% FBS, 0.45% (w/v) glucose, 1 mM sodium pyruvate, 2 mM L-glutamine, 100 U/mL penicillin, and 100 μg/mL streptomycin (all reagents from Thermo Fisher Scientific). Next, the tissue was triturated through a P1000 pipette, and the cell suspension was passed through a 70 µm MACS SmartStrainer (Miltenyi Biotec, Bergisch Gladbach, Germany). Cells were then counted, diluted to 8 × 10⁵ cells/mL in Neurobasal™-A Medium (NBA, Thermo Fisher Scientific) containing 1X B27 supplement (Thermo Fisher Scientific), 2 mM L-Glutamine (Thermo Fisher Scientific), 100 U/mL penicillin, and 100 μg/mL streptomycin (Thermo Fisher Scientific), and plated on 6-well plates pre-coated with 0.1 mg/mL poly-L-lysine (Merck Millipore). Cells were maintained in a standard, humidified 5% CO₂ incubator until the day of the experiment (5–7 days in vitro, DIV).
## 2.5. Insulin Resistance
To monitor the insulin response, cells were challenged with 100 nM insulin (Merck Millipore) for 30 min. To induce insulin resistance, cells were treated for 24 h with 40 mM glucose (Merck) or 20 nM insulin before receiving the insulin challenge [36]. Controls were treated with vehicle. At the end of the experiments, both H4Swe cells and primary mouse neurons were washed with PBS and harvested by 5 min centrifugation at 2900× g; the pellets were re-suspended in RNAlater (Thermo Fisher Scientific), stored at 4 °C for 24 h, and transferred to −20 °C until RNA extraction. Treatments were repeated in 3–6 independent experiments.
## 2.6. Quantitative Real-Time PCR
Gene expression was assessed by qPCR as previously reported [37] with slight changes. RNA was extracted with the Aurum total RNA mini kit (Bio-Rad, Hercules, CA, USA), which includes a DNase I digestion step, following the manufacturer’s instructions. The RNA amount was assessed by UV absorbance in a NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific). cDNA was synthesised using the iScript Advanced cDNA synthesis Kit (Bio-Rad). qPCR was performed with SYBR Green technology in a 7900HT Fast Real-Time PCR System (Thermo Fisher Scientific) with SsoAdvanced Universal SYBR Green Supermix (Bio-Rad) in 20 µL according to this protocol: stage 1: 95 °C, 20 s; stage 2: 40 × (95 °C, 3 s; 60 °C, 30 s). Primers were selected with the NCBI Primer-BLAST tool and purchased from Eurofins Italia (Torino, Italy). Sequences are reported in Table 1. Data were analysed using the Delta-Delta-Ct method, converting to a relative ratio (2^−ΔΔCt) for statistical analysis [38] and normalising to the geometric average of two endogenous reference genes, Gapdh and Ywhaz, as previously reported [39,40]. The specificity of amplification products was evaluated by building a dissociation curve over the 60–95 °C range.
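As a worked illustration of the quantification above (object names are ours, not from the paper): because expression scales as 2^−Ct, normalising to the geometric mean of two reference genes is equivalent to subtracting the arithmetic mean of their Ct values.

```r
# Delta-Delta-Ct with two reference genes (hypothetical per-sample Ct vectors)
ref_ct  <- (ct_gapdh + ct_ywhaz) / 2           # arithmetic mean of reference Cts
dct     <- ct_target - ref_ct                  # Delta Ct
ddct    <- dct - mean(dct[group == "control"]) # Delta-Delta Ct vs. control mean
rel_exp <- 2^(-ddct)                           # relative ratio used for statistics
```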
## 2.7. Western Blot
Hippocampi were homogenised with a micro-pestle in ice-cold lysis buffer (10% w/v) containing 50 mM Tris-HCl (pH 7.5), 2% Igepal, 10 mM MgCl2, 0.5 M NaCl, 2 mM EDTA, 2 mM EGTA, 5 mM benzamidine, 0.5 mM phenyl-methylsulfonyl fluoride, 8 mg/mL pepstatin A, 20 mg/mL leupeptin, 50 mM β-glycerolphosphate, 100 mM sodium fluoride, 1 mM sodium vanadate, 20 mM sodium pyrophosphate, and 100 nM okadaic acid. Homogenates were clarified by 1 min centrifugation at 10,000× g at 4 °C, and the protein concentration was assessed with the Precision Red Protein Quantification Assay (Cytoskeleton). H4Swe cells were seeded into 6-well plates at a density of 9.5 × 10⁵ cells/well. Following treatments, cells were washed once in Tris-buffered saline (TBS), lysed, and assayed for protein with the Bradford method (Merck). In both instances, lysates were processed for Western blot as previously reported [41], with slight changes. Briefly, lysates were separated using 4–12% Bis-Tris gels (Novex pre-cast gels, Thermo Fisher Scientific) and transferred to 0.45 μm nitrocellulose membranes (Thermo Fisher Scientific). Blots were blocked for 1 h at room temperature in 1X Odyssey blocking buffer (TBS) and incubated with primary antibodies overnight at 4 °C in Odyssey blocking buffer (TBS) plus 0.1% Tween-20 (Tween-20 TBS). Membranes were washed 3 × 10 min in Tween-20 TBS at room temperature, followed by incubation with an IRDye-conjugated secondary antibody diluted in Tween-20 TBS for 1 h at room temperature. Blots were washed 2 × 10 min in TBST and 1 × 10 min in TBS and visualised with the Odyssey Infrared Imaging System (Li-Cor) by quantifying fluorescent signals as Integrated Intensities (I.I. K Counts). After background subtraction, protein levels were assessed as ratios of total protein to the Gapdh loading control or of phosphorylated to total protein.
## 2.8. Immunofluorescence
In brain sections, immunofluorescence was carried out as previously reported [42]. Briefly, after post-fixing, brains were embedded in an OCT cryoembedding matrix and sectioned on the coronal plane at 30 µm thickness with a cryostat. Sections were treated with a blocking solution of 2% bovine serum albumin, 2% normal goat serum, and 0.2% Triton X-100 in PBS for 20 min at room temperature and incubated overnight at 4 °C in primary antibodies. Secondary antibodies were diluted 1:1000 in the above blocking solution, with the appropriate serum. After immunohistochemical processing, sections were counterstained with the fluorescent nuclear marker DAPI (100 ng/mL) for 10 min at room temperature and mounted on slides with 0.1% paraphenylenediamine in glycerol-based medium (90% glycerol, 10% PBS). For H4Swe cell immunostaining, 5 × 10⁵ cells/well were seeded onto 18 mm round coverslips in 24-well plates and left to attach overnight. The next day, cells were washed twice with PBS and fixed with 4% paraformaldehyde for 20 min. Fixed cells were treated for 10 min with blocking solution and incubated overnight with primary antibodies in blocking solution. After three washes with PBS, samples were incubated with secondary antibodies diluted 1:2000 in blocking solution for 1 h. After final washes, coverslips were treated with DAPI solution. Coverslips were fixed onto glass slides with a drop of anti-fading mounting medium and sealed with nail polish. Primary cortical cells were fixed in 10% (v/v) formalin solution (Titolchimica) for 15 min at room temperature, washed three times in PBS, blocked in PBS containing 10% (v:v) normal goat serum (Thermo Fisher Scientific), and permeabilised with 0.3% (v:v) Triton X-100 (Merck Millipore) in PBS for 40 min. Next, cells were incubated with mouse anti-Map2 and rat anti-Gfap primary antibodies overnight at 4 °C and, after three PBS washing steps, with the secondary antibodies anti-mouse IgG1 CF 568 and anti-rat Alexa Fluor 647 for 1 h at room temperature. Antibodies were diluted in PBS containing 5% (v:v) normal goat serum. Nuclei were counterstained with DAPI (1:5000), and coverslips were mounted on slides using DAKO fluorescence mounting medium (Agilent, Santa Clara, CA, USA). Images at different Z-planes were collected on a Leica TCS SP5 confocal microscope. Images were processed with Imaris software (Bitplane AG, Belfast, UK) or ImageJ.
## 2.9. Statistical Analysis
The data are presented as observed mean values ± SEM. The data were analysed using a 1-way ANOVA with treatment (control, insulin, glucose + insulin, insulin + insulin) as the factor, or a 2-way ANOVA with genotype (wild-type, 3xTg-AD) and either age (6, 12, and 18 months) or treatment (control, insulin, glucose + insulin, insulin + insulin) as factors. When the samples were analysed in different plates using a complete block design, an additional blocking factor (plate) was included in the statistical model to account for plate-to-plate variability [43]. The analyses were followed by planned comparisons of the predicted means. The analysis was performed using the InVivoStat v4.4.0 software [44]. The data were log-transformed, where appropriate, to stabilise the variance and satisfy the parametric assumptions. A value of p < 0.05 was considered statistically significant.
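Although the authors used InVivoStat, the same model can be expressed in base R. The sketch below is illustrative only, with a hypothetical data frame `dat` whose columns `rel_exp`, `plate`, `genotype`, and `age` (all factors except `rel_exp`) are our own names; the emmeans package stands in for the planned comparisons of predicted means.

```r
# 2-way ANOVA with 'plate' as a blocking factor, on log-transformed values
dat$y <- log(dat$rel_exp)                     # stabilise variance
fit   <- aov(y ~ plate + genotype * age, data = dat)
summary(fit)

# planned comparisons of predicted means (genotype within each age)
library(emmeans)
emmeans(fit, pairwise ~ genotype | age)
```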
## 3. Results
Since comorbidity is frequently observed between AD and T2DM, in our previous study [26], we applied a systems biology approach to investigate if common pathophysiological alterations could be identified at a molecular level. Similar approaches had previously highlighted the role of shared cellular signalling pathways contributing to both T2DM and AD. Among them, a prominent role was discovered for neurotrophin, PI3K/AKT, MTOR, and MAPK signalling, as well as for microglial-mediated immune responses, which can cross-talk to each other [45]. In addition, our previous data revealed a central role for autophagic mechanisms; in particular, a number of autophagy-related genes were indicated as important players, namely ATG16L1, ATG16L2, GABARAP, GABARAPL1, GABARAPL2, and SQSTM1. Therefore, we first aimed to investigate whether these genes were specifically modulated in association with neurobiological alterations characterising AD. We thus analysed the expression of the respective mouse orthologues (Atg16l1, Atg16l2, Gabarap, GabarapL1, GabarapL2, Sqstm1) in a transgenic mouse model of AD. 3xTg-AD mice harbour three mutant genes for the beta-amyloid precursor protein (βAPPSwe), presenilin-1 (PS1M146V), and tauP301L [46,47]; as a consequence, the mice progressively develop plaques and tangles, as well as cognitive impairments [47,48,49]. We thus compared hippocampal gene expression between 3xTg-AD mice and the respective wild-type controls at different ages. At 6 months, Atg16L1, Atg16L2, and GabarapL1 were expressed at significantly higher levels in 3xTg-AD mice (Figure 1A,B,D). In contrast, at 12 months, GabarapL2 expression was significantly reduced, whereas Sqstm1 levels were elevated (Figure 1E,F).
At the protein level, although an increase in Sqstm1 was observed qualitatively in the hippocampus (Figure 2A), it could not be confirmed in semi-quantitative Western blotting experiments, possibly because of the lower sensitivity of the technique (Figure 2B,C).
Next, we investigated whether these genes were modulated in the presence of AD-T2DM comorbidity. To model this condition, we first employed the human glioblastoma H4 cell line stably expressing the βAPP-Swe mutation [50,51,52] and applied treatments able to induce insulin resistance [53,54]. In this model, phospho-Akt/Akt levels were significantly increased by the insulin challenge (100 nM), whereas this response was abated after chronic treatment with high-concentration insulin, thus showing that insulin resistance was successfully achieved (Figure 3).
Similar to findings obtained in 3xTg-AD mice, in the presence of insulin resistance, Atg16L1, Atg16L2, and GabarapL1 expression levels were significantly increased (Figure 4A,B,D).
Subsequently, we examined whether Sqstm1 phosphorylation levels were affected by the onset of insulin resistance. No significant differences were revealed by Western blot or immunofluorescence analyses (Figure 5).
Next, we generated a second AD-T2DM cellular model by inducing insulin resistance in neuronal primary cultures obtained from 3xTg-AD mice and wild-type controls. In this model, we confirmed that primary cultures were enriched in neurons (Figure 6).
Gene expression analysis confirmed that Atg16L1 was significantly increased in cultures from transgenic mice when insulin resistance was induced (Table 2), whereas no other difference was detected in the other genes analysed. In addition, Gabarap showed a significant reduction by genotype (Table 2). However, the findings showed a very high level of variability within groups.
## 4. Discussion
In this study, we examined the modulation of genes recognised as relevant for the common cellular dysregulations sustaining the observed comorbidity between AD and T2DM in our previous systems biology study [26]. Here, we explored their expression in 3xTg-AD mice, a transgenic mouse model of AD overexpressing mutated human genes associated with early-onset AD (PSEN1 and APP) or with the formation of neurofibrillary tangles (tau) [46]. In this mouse model, the neuropathological features of AD, amyloid plaques and neurofibrillary tangles, as well as neuroinflammation, develop progressively in an age-dependent fashion. In particular, extracellular amyloid beta deposition starts at six months of age and progressively increases to reach its full extent at 15 months [47,49]. Tau pathology follows a similar age-related increase, although delayed with respect to amyloid beta pathology [46,47,49]. Likewise, cognitive impairments reproducing the human pathological features of AD appear at six months and become progressively more severe at 12 and 20 months [49]. We discovered that at 6 months of age, Atg16L1, Atg16L2, and GabarapL1 were expressed at higher levels in 3xTg-AD mice, whereas at later time points this increase subsided. These alterations are in agreement with those obtained in the previous study, where pre-frontal cortex samples were analysed in two AD mouse models [26]. These findings suggest that the increased expression may occur as an attempt to oppose the neuropathological alterations by activating a neuroprotective response.
A limitation of this experimental design is that 3xTg-AD mice were generated in a hybrid C57BL/6:129 genetic background; therefore, the control line we used, although similar, is not identical. However, the use of C57BL/6 as a control strain is well documented in previous studies [55,56,57,58].
To reproduce the molecular dysregulation characterising insulin resistance in AD brains, we used neuronal models of AD based either on a neuronal cell line generating amyloid beta deposits, H4Swe cells, or on the 3xTg-AD mouse primary neuron cultures. H4Swe cells are well established as tools to investigate AD-related cellular dysregulation [50,51,52]. However, a limitation is that they do not share all neuronal characteristics, being a neuroglioma-derived line. Therefore, primary neurons were also investigated. In both in vitro models, we established a condition of insulin resistance by prolonged treatment with high insulin concentrations. As a consequence, the normal response to insulin challenge is hampered by prolonged insulin exposure, and the normal Akt phosphorylation and activation responses characterising the insulin signal transduction pathway are not induced [53]. Similar to findings in 3xTg-AD mice, we found that Atg16L1, Atg16L2, and GabarapL1 were significantly elevated in insulin resistance conditions. The increased expression of these genes in the cell model of AD-T2DM comorbidity corroborates the hypothesis of a neuroprotective role of this response, as hyperglycaemia has been previously associated with the increased beta amyloid plaque production [59].
Atg16L1 was identified as the mammalian orthologue of the corresponding yeast gene, which was known to provide a crucial contribution to autophagic processes [60,61]. Autophagy was discovered as a process occurring in response to cellular stresses such as nutrient deprivation, infection, or hypoxia. Its chief function is providing nutrients for vital cellular activities during fasting by degrading cellular components and releasing them back to the cytoplasm to be used again. However, in addition to this non-selective approach, further studies demonstrated that autophagy can selectively eliminate potentially harmful damaged mitochondria or protein aggregates [61,62]. Consequently, autophagy dysfunction has been implicated in several diseases and its components generated interest as potential pharmacological targets [28,62]. In autophagy, starvation signals promote the recruitment of autophagy proteins to a specific subcellular location, where they assemble a structure called the phagophore. An isolation membrane is gradually formed to isolate a portion of the cytosol and is finally sealed into a vesicle, termed the autophagosome, which contains cytoplasmic material. The autophagosome then fuses with the lysosomal membrane, and the autophagic body together with its cargo are degraded [62,63]. In this process, the role of Atg16L1 is essential for autophagy initiation, as its recruitment in the Atg12-Atg5 complex is required to engage autophagic proteins in the phagophore assembly site and contribute to its scaffolding by Atg8/LC3 protein lipidation [60,64,65,66]. Therefore, the increase observed in the present study suggests an effort to trigger autophagic responses to counteract the increased production of abnormal proteins and rescue insulin response.
In addition to its well-demonstrated role in canonical autophagy, Atg16L1 was shown to exert different functions related to a structural component specifically observed in the C-terminal of the mammalian protein compared to the yeast counterpart. This specific component is necessary for the Atg16L1-mediated lipidation of single membranes, a non-canonical autophagy pathway, and specific cargo recruitment [66]. Furthermore, Atg16L1 contributes to modulating the extent of the innate immune response to injuries or infection, with an anti-inflammatory role [66,67]. Recent results showed that aged mice lacking this C-terminal domain of Atg16L1 develop beta amyloid plaques, excessive tau phosphorylation, reactive microgliosis, and memory impairments [68]. The proposed mechanism points to Atg16L1 involvement in a process defined as LANDO (LC3-associated endocytosis), which contributes to TREM2, CD36, and TLR4 recycling [68]. Therefore, the observed increased Atg16L1 levels may contribute to establishing a protective response that goes beyond the activation of autophagic responses, but also involves a rescue from neuronal damages through different mechanisms. Interestingly, we observed increased Atg16L1 expression in all investigated models. This result reinforces the notion of a primary role of this protein in the cellular response to both AD and T2DM pathophysiology, in a fashion independent from the in vivo or in vitro model which is well-conserved through evolution both in mice and in humans.
Atg16L2 is a second isoform of Atg16L1, sharing a similar domain structure and a similar ability to bind Atg12-Atg5 and form a complex. However, the Atg16L2 protein is not recruited to phagophores and does not contribute to autophagosome formation; thus, it is not essential to canonical autophagy [69]. However, data suggesting the possibility of a cell-specific involvement in canonical autophagy are also available [70]. In addition, a recent report on the generation of Atg16L2 knock-out mice demonstrated a contribution of this gene to the maturation of immune cells and suggested that distinct functions are associated with respect to Atg16L1 [71]. Data showing its relevance in serious diseases such as Crohn’s disease and various cancers notwithstanding, very incomplete information is available on the role of Atg16L2 [72]. Our findings also support the involvement of this widely expressed gene in the pathophysiology of insulin resistance in AD brains.
The GabarapL1 protein belongs to the Atg8/LC3 family of autophagy proteins, which includes six members: LC3A, LC3B, LC3C, Gabarap, GabarapL1, and GabarapL2. The recruitment of Atg8 family proteins to the forming phagophore is mediated by the above-mentioned Atg12-Atg5–Atg16L1 complex and is essential for phagophore elongation and, ultimately, for autophagy [62,63,73]. GabarapL1 has also been implicated in autophagosome fusion with the lysosome, and these functions are thought to contribute to the degradation of oncogenic proteins and to exert tumour-suppressive functions [73]. Interestingly, GabarapL1 has been specifically implicated in a newly discovered selective autophagy process termed glycophagy, which is involved in the transport and delivery of glycolytic fuel substrates [74]. Since these pathways respond to cellular energy demand, compelling evidence links glycophagy-mediated glucose availability to energy metabolism, in agreement with our findings.
With regard to Sqstm1 levels, contrasting findings have been previously reported. In agreement with the present results, no alterations were detected in the hippocampus or in mitochondria-enriched hippocampal fractions of young 3xTg-AD mice [75,76]. Conversely, a decrease was found in whole brain homogenates and in the mitochondria-enriched hippocampal fractions of old 3xTg-AD mice [76,77,78].
## 5. Conclusions
This study investigated the molecular underpinnings of the comorbidity between AD and T2DM in cellular models of insulin resistance in the presence of AD-related neuropathological features. Our findings are in agreement with the hypothesis that impaired autophagic mechanisms are important in the pathophysiology of AD through nonstandard mechanisms. In particular, the autophagy-related genes Atg16L1, Atg16L2, and GabarapL1 were highlighted as having a more relevant function in this mechanism, in addition to GabarapL2 and Sqstm1.
# Sleep Disturbances in Generalized Anxiety Disorder: The Role of Calcium Homeostasis Imbalance
## Abstract
Patients with a generalized anxiety disorder (GAD) often report preeminent sleep disturbances. Recently, calcium homeostasis gained interest because of its role in the regulation of sleep–wake rhythms and anxiety symptoms. This cross-sectional study aimed at investigating the association between calcium homeostasis imbalance, anxiety, and quality of sleep in patients with GAD. A total of 211 patients were assessed using the Hamilton Rating Scale for Anxiety (HAM-A), Pittsburgh Sleep Quality Index questionnaire (PSQI), and Insomnia Severity Index (ISI) scales. Calcium, vitamin D, and parathyroid hormone (PTH) levels were evaluated in blood samples. Correlation and linear regression analyses were run to evaluate the association of HAM-A, PSQI, and ISI scores with peripheral markers of calcium homeostasis imbalance. Significant correlations emerged between HAM-A, PSQI, ISI, PTH, and vitamin D. The regression models showed that patients with GAD displaying low levels of vitamin D and higher levels of PTH exhibit a poor subjective quality of sleep and higher levels of anxiety, reflecting a higher psychopathological burden. A strong relationship between peripheral biomarkers of calcium homeostasis imbalance, insomnia, poor sleep quality, and anxiety symptomatology was underlined. Future studies could shed light on the causal and temporal relationship between calcium metabolism imbalance, anxiety, and sleep.
## 1. Introduction
Sleep is a basic human need and is essential for good health, well-being, and good quality of life. We spend nearly a third of our life sleeping. However, people often experience difficulties in sleeping that may become disabling and result in daytime dysfunction [1,2,3]. According to the third edition of the International Classification of Sleep Disorders (ICSD-3), insomnia is characterized by difficulty in initiating, maintaining, or continuing sleep, despite adequate opportunity and conditions for sleep. Nowadays, insomnia represents the most common sleep disorder [4,5], especially affecting women and older people, and it coexists very frequently with general health problems (e.g., cardiovascular diseases, chronic pain syndrome, diabetes, obesity, asthma) [6]. Sleep disturbances are commonly detected both in the general population and in individuals with psychiatric disorders [7]. Just as sleep can affect mental health, having a psychiatric disorder can, in turn, impair sleep quality. Studies indicate that insomnia very often coexists with psychiatric disorders [8]. In particular, insomnia is most frequently associated with major depression or an anxiety disorder, mainly generalized anxiety disorder (GAD) [9].
About 60–70% of patients with GAD and panic disorder reported prominent sleep disturbances [9], with a negative impact on functioning and quality of life [10] and on the course and treatment of psychiatric illness [11].
Sleep–wake regulation is classically described as resulting from the interaction of circadian and homeostatic processes [12], which in turn influence the opposing activity of neurons stimulating wakefulness and neurons stimulating sleep [13]. The dysregulation of this process and the consequent insomnia seem to be linked to alterations in different hormones, such as insulin, cortisol, leptin, orexin, ghrelin, growth factors, and vitamin D [14,15,16,17,18,19,20].
In recent years, calcium homeostasis has received increasing interest, with research supporting the role of parathyroid hormone (PTH), vitamin D (Vit D), and calcium (Ca++) in mental health conditions [21]. Vit D, together with PTH, regulates the homeostasis of Ca++, modulating calcium transport in the gut, bone, and kidney, as well as immune modulation, the antioxidant defense system, and several inflammatory processes [22,23,24]. Through the appropriate actions of Vit D and PTH, Ca++ is maintained within range or promptly corrected when necessary. An alteration or defect in any component of this system results in calcium homeostasis imbalance, which has already been demonstrated in schizophrenia [25], depression [26], bipolar disorder [27], anxiety [28,29,30], and sleep disorders [31,32,33,34,35].
This could be explained by considering the different activities of Vit D, Ca++, and PTH. Vit D receptors are widely expressed throughout the human body and brain [36,37,38,39], and their increased expression has been demonstrated in specific brain regions involved in anxiety and sleep regulation, such as the prefrontal cortex and the limbic system [40,41]. In these areas, particularly in the prefrontal cortex [42], Vit D can directly increase the biosynthesis of dopamine/noradrenaline and serotonin [43,44,45,46] and enhance the expression of growth hormone and BDNF [47,48,49]. Ca++ is very important in the central nervous system (CNS) as a cofactor, second messenger, and signaling molecule, and for neurotransmitter release [50]. Additionally, PTH contributes to neuronal homeostasis [51] by regulating circulating and intracellular calcium levels in the CNS [52].
Vit D has gained prominence due to its antioxidant, anti-inflammatory, pro-neurogenic, and neuromodulatory properties, which appear to be fundamental to its anxiolytic effects [53,54,55,56]. These data are supported by studies demonstrating that Vit D supplementation can improve anxiety symptoms [57,58,59], as well as sleep disorders and sleep quality [60]. On the other hand, experimental evidence has shown that Ca++ signaling plays a crucial role in regulating sleep–wake rhythms [61]. There is also evidence suggesting that increased dietary Ca++ intake improves anxiety [62] and quality of sleep and reduces insomnia [63,64]. Interestingly, total Ca++ presents a diurnal variability during normal sleep [65], underlining its role in regulating sleep duration in mammals [66], possibly due to its involvement in the production of melatonin from tryptophan in the brain [67].
Although several studies investigated the co-occurrence of sleep disturbances and anxiety disorders [68,69], showing that the relationship between these two conditions is particularly complex [70], few studies focused on calcium homeostasis imbalance and data are not conclusive. Therefore, such experimental evidence led clinicians to comprehensively investigate the effect of calcium metabolism imbalance on anxiety disorders.
Based on the above, the current study aimed at investigating the association between calcium imbalance, assessed through the determination of Ca++, Vit D, and PTH levels, the severity of anxiety psychopathology, and an altered hypnic pattern in a sample of patients suffering from a generalized anxiety disorder. Thus, the current study explores whether calcium metabolism imbalance could be associated with sleep quality and worsening of symptoms in patients with anxiety disorders. The aims of the present study are (1) to identify the association between calcium imbalance and quality of sleep in patients suffering from generalized anxiety disorder (GAD) and (2) to evaluate how this association may impact illness severity in patients suffering from GAD.
## 2. Materials and Methods
Consecutive outpatients were screened for eligibility at the Psychiatric Unit of the University Hospital Mater Domini in Catanzaro from May 2020 to July 2022. Inclusion criteria were age between 18 and 75 years; primary diagnosis of GAD according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) [8]; and willingness to participate in the study. Participants were considered not eligible in cases of an inability to provide written informed consent to participate in the study; presence of moderate or severe cognitive impairment, as assessed at the first contact visit by a Mini-Mental State Examination (MMSE) score ≤22 [71]; comorbidity with neurologic diseases, endocrinological diseases (hypo/hyperparathyroidism), or substance and/or alcohol use disorders; pregnancy or post-partum period; or current treatment with medications that can alter calcium metabolism, such as Vit D supplementation, calcium phosphonate, or bisphosphonates.
Patients presenting comorbid depressive features were not excluded, considering the high prevalence of the co-occurrence of anxiety and depressive symptoms in clinical practice. However, we excluded patients whose depressive symptoms, as clinically evaluated at the moment of enrollment, were severe or exceeded the subthreshold level.
All participants meeting the inclusion/exclusion criteria were recruited and included in the study after receiving a full description of the study aims and design and after providing their written informed consent to participate. The Structured Clinical Interview for DSM-5 Disorders, Clinician Version (SCID-5-CV) [72] was used for the diagnosis. All tests were performed by experienced psychiatrists who were trained in the administration of neuropsychiatric tests and used these tools in their daily clinical practice.
The study was carried out following the latest version of the Declaration of Helsinki, and protocol approval was obtained from the Ethics Committee of the University of Catanzaro (307/2020).
## 2.1. Procedures and Measures
Patients’ socio-demographic and clinical characteristics were collected using an ad hoc schedule evaluating sex, age, civil status, education, employment status, family history of psychiatric illnesses, and age at onset of the disorder.
## 2.1.1. Psychological Measures
Participants answered the following scales:
- Hamilton Rating Scale for Anxiety (HAM-A) [73], to assess the clinical severity of anxiety symptoms. The scale consists of 14 items, each scored from 0 (not present) to 4 (severe). Each item is defined by a series of symptoms, and the scale measures both psychic anxiety (mental agitation and psychological distress) and somatic anxiety (physical complaints related to anxiety). The total score ranges from 0 to 56, where <17 indicates mild severity, 18–24 mild to moderate severity, and 25–30 moderate to severe severity. Cronbach’s alpha was 0.934 in this study.
- Pittsburgh Sleep Quality Index Questionnaire (PSQI), to analyze sleep quality. The self-reported questionnaire is made up of 19 items, used to create seven components, each scored between 0 (no problem) and 3 (major problem): subjective sleep quality (hereafter referred to as Quality), sleep latency (Latency), sleep duration (Duration), habitual sleep efficiency (Efficiency), sleep disturbances (Disturbances), use of sleeping medication (Medication), and daytime dysfunction (Dysfunction). The total score across these seven components varies between 0 (no problem) and 21 (major problem); a global score of ≥5 identifies people with poor sleep quality, while a score below 5 indicates good sleep quality [74,75]. Cronbach’s alpha was 0.77 in a previous study [76] and 0.834 in this study.
- Insomnia Severity Index (ISI), to assess the nature, severity, and impact of sleep difficulties over the last 2 weeks. A 5-point Likert scale is used to rate the 7 items, with total scores ranging from 0 to 28 that yield four categories: absence of insomnia (0–7); subthreshold insomnia (8–14); moderate insomnia (15–21); and severe insomnia (22–28) [77]. Cronbach’s alpha was 0.784 in this study.
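The scoring rules above condense naturally into a few lines of code. The following Python sketch is our own illustration (not study software, and the example respondent is invented); only the component names, score ranges, and cut-offs come from the scale descriptions above.

```python
# Scoring sketch for the PSQI and ISI as described above; illustrative only.

PSQI_COMPONENTS = ["Quality", "Latency", "Duration", "Efficiency",
                   "Disturbances", "Medication", "Dysfunction"]

def psqi_global(components: dict) -> int:
    """Sum the seven PSQI components (each scored 0-3) into a 0-21 global score."""
    return sum(components[name] for name in PSQI_COMPONENTS)

def psqi_quality(global_score: int) -> str:
    """A global score of 5 or higher identifies poor sleepers."""
    return "poor sleep quality" if global_score >= 5 else "good sleep quality"

def isi_category(total: int) -> str:
    """Map a 0-28 ISI total (seven items rated 0-4) onto the four severity bands."""
    if total <= 7:
        return "absence of insomnia"
    if total <= 14:
        return "subthreshold insomnia"
    if total <= 21:
        return "moderate insomnia"
    return "severe insomnia"

# A hypothetical respondent:
example = dict(zip(PSQI_COMPONENTS, [2, 1, 1, 0, 2, 0, 1]))
score = psqi_global(example)
print(score, psqi_quality(score))   # 7 poor sleep quality
print(isi_category(16))             # moderate insomnia
```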
## 2.1.2. Biological Measures
Serum levels of calcium (mg/dL), 25-OH-vitamin D (ng/mL), and PTH (pg/mL) were assessed in the same laboratory to ensure standardized procedures. Blood samples were collected from all patients at recruitment after 12–14 h fasting.
Calcium was measured using standard laboratory methods. Blood was centrifuged, and serum was stored at −30 °C until 25-OH-vitamin D and PTH were evaluated by chemiluminescence immunoassays using dedicated kits (Diasorin Liaison; ADVIA Centaur). According to the Endocrine Society’s Clinical Practice Guideline, Vit D deficiency was defined as values <20 ng/mL; insufficiency as 21–29 ng/mL; and sufficiency as 30–100 ng/mL [78]. Levels of Ca++ between 8.9 and 10.01 mg/dL were considered normal, whilst the range 15–55 pg/mL was considered normal for PTH.
Levels of Vit D < 20 ng/mL, Ca++ < 8.8 mg/dL or >10 mg/dL, and PTH < 15 pg/mL or >55 pg/mL were the cut-offs considered for calcium homeostasis imbalance (Table 1).
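These cut-offs translate directly into a simple screening rule. The sketch below is our illustration: the function names are ours, the thresholds are those stated above, and the example values are the sample means reported in the Results.

```python
# Flagging calcium homeostasis imbalance from the cut-offs stated in the text.

def vitamin_d_status(ng_ml: float) -> str:
    """Endocrine Society bands: deficiency <20, insufficiency up to 29,
    sufficiency 30-100 ng/mL (the text's 20-21 gap is closed here)."""
    if ng_ml < 20:
        return "deficiency"
    if ng_ml < 30:
        return "insufficiency"
    return "sufficiency"

def calcium_homeostasis_imbalance(vit_d: float, ca: float, pth: float) -> bool:
    """True if any marker falls outside the study's reference windows:
    Vit D < 20 ng/mL; Ca++ < 8.8 or > 10 mg/dL; PTH < 15 or > 55 pg/mL."""
    return (vit_d < 20) or not (8.8 <= ca <= 10) or not (15 <= pth <= 55)

# Sample means from the Results: Vit D 29.4 ng/mL, Ca++ 9.5 mg/dL, PTH 54.6 pg/mL.
print(vitamin_d_status(29.4))                          # insufficiency
print(calcium_homeostasis_imbalance(29.4, 9.5, 54.6))  # False (all within range)
```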
## 2.2. Statistical Analysis
Descriptive statistics were calculated for socio-demographic and clinical characteristics, as well as for scores at relevant assessment instruments. The quantitative variables were expressed as mean and standard deviation (SD) and the qualitative variables as frequency and percentage (%).
A Spearman correlation analysis was used to assess the relationship between sleep quality, anxiety symptoms, and calcium homeostasis imbalance. Linear regression analysis was performed to further investigate the relationship between sleep quality, anxiety, and calcium homeostasis imbalance, using the PSQI, ISI, and HAM-A scores as dependent variables and PTH, calcium, and Vit D as independent variables. All tolerance values in the regression analyses were >0.1 and all variance inflation factors were <10, indicating that the assumption of no multicollinearity was not violated. A p-value < 0.05 was considered significant in this study. Data were analyzed with the Statistical Package for Social Sciences Version 26 (SPSS, Chicago, IL, USA) [79].
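As an illustration of this pipeline, the sketch below reproduces the sequence of steps (Spearman correlation, one of the three OLS regressions, tolerance/VIF diagnostics) using pandas, SciPy, and statsmodels in place of SPSS. The data frame is synthetic, loosely seeded with the sample means reported below; it sketches the workflow, not the study's actual code or data.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic stand-in for the patient table (one row per patient).
rng = np.random.default_rng(1)
n = 211
df = pd.DataFrame({
    "PTH": rng.normal(54.6, 20.5, n),    # pg/mL
    "VitD": rng.normal(29.4, 10.0, n),   # ng/mL
    "Ca": rng.normal(9.5, 0.4, n),       # mg/dL
})
# Fabricated outcome so the example runs end to end.
df["PSQI_total"] = 10 + 0.1 * df["PTH"] - 0.2 * df["VitD"] + rng.normal(0, 2, n)

# Step 1: Spearman correlation.
rho, p = spearmanr(df["PSQI_total"], df["VitD"])
print(f"Spearman PSQI vs Vit D: rho={rho:.2f}, p={p:.3g}")

# Step 2: linear regression (here, PSQI total on PTH, Vit D, and Ca++).
X = sm.add_constant(df[["PTH", "VitD", "Ca"]])
model = sm.OLS(df["PSQI_total"], X).fit()
print(f"R2={model.rsquared:.3f}, F={model.fvalue:.1f}, p={model.f_pvalue:.3g}")

# Step 3: collinearity check, VIF < 10 (i.e., tolerance = 1/VIF > 0.1).
for i, name in enumerate(X.columns[1:], start=1):
    print(name, "VIF =", round(variance_inflation_factor(X.values, i), 2))
```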
## 3. Results
Overall, 211 participants suffering from GAD met the inclusion/exclusion criteria and were enrolled in the study. The average age (±standard deviation, SD) was 46.9 (±13.8) years. Most of the participants were female (51%), married (45.5%), graduated (76%), employed (63%), and with a positive family history of psychiatric disorders (64.5%). The mean age at onset was 27.8 ± 11.1 years. The mean HAM-A, PSQI, and ISI total scores were 25.6 ± 13.7, 10.96 ± 6.2, and 14.36 ± 8.2, respectively. Indices of calcium metabolism showed a normal calcium level (9.5 ± 0.4 mg/dL), a higher PTH level (54.6 ± 20.5 pg/mL), and a lower Vit D level (29.4 ± 25.1 ng/mL) (Table 2).
Table 3 includes the results of Spearman’s correlations between the HAM-A total score, PSQI subscales and total score, ISI total score, calcium, PTH, and Vit D. Significant correlations emerged for all the variables, with the sole exception of Ca++.
A linear regression analysis was performed to assess the association between calcium imbalance, anxiety symptoms, and quality of sleep. In the three models, the PSQI total, HAM-A total, and ISI total scores, respectively, were selected as dependent variables, with PTH, Vit D, and Ca++ as independent variables. In the first model, higher PTH levels and lower Vit D levels predicted the PSQI total score (R2 = 0.603; F = 80.752; p < 0.001); in the second model, higher PTH levels and lower Vit D levels predicted the HAM-A total score (R2 = 0.685; F = 115.137; p < 0.001); and in the last model, higher PTH levels and lower Vit D levels predicted the ISI total score (R2 = 0.672; F = 105.516; p < 0.001). Thus, an imbalance of PTH and Vit D levels predicted insomnia, higher levels of anxiety, and poor quality of sleep. See Table 4.
## 4. Discussion
This study found a strong relationship between calcium homeostasis imbalance, poor sleep quality, and anxiety symptomatology in patients suffering from GAD. To the best of our knowledge, this is the first study aimed at investigating the association between calcium homeostasis imbalance and quality of sleep in patients with GAD. The study findings suggest that patients with GAD and low levels of Vit D and higher levels of PTH exhibit insomnia, poor quality of sleep, and higher levels of anxiety, highlighting the impact of calcium imbalance on the psychopathological burden.
A growing body of literature has focused on calcium imbalance in psychiatric disorders [21,25,27,28,29,30,31,32,33,34,35], and our results are in line with it. In our sample, significant correlations emerged for PSQI, HAM-A, ISI, PTH, and Vit D. The association between poor sleep quality, high levels of PTH, and low levels of Vit D may be read as reflecting sleep–wake dysregulation occurring as a consequence of calcium imbalance [20]. A growing number of studies and a recent meta-analysis have reported a link between Vit D and sleep [35]. Adequate levels of this hormone seem to be necessary for the maintenance of sleep, reducing the number of nocturnal awakenings [80], while low Vit D levels have been reported to be associated with shorter sleep duration [81,82]. Although the exact mechanism by which Vit D affects sleep regulation is still unclear, the key to this link seems to be the expression of Vit D receptors in cortical, subcortical, and brainstem areas involved in sleep control [83], such as the prefrontal cortex [84], cingulate gyrus [85], hippocampus [86], caudate nucleus [87], lateral geniculate nucleus [88], and substantia nigra [83,89].
Interestingly, Vit D is involved in regulating the conversion of tryptophan into 5-HTP and the production of melatonin in the brain [67,90]. Melatonin participates in the regulation of circadian rhythms [91] and adjusts the sleep–wake cycle, with a consequent positive effect on the quality of sleep [92]. In fact, epidemiological studies found that dietary intake of Vit D was related to the midpoint of sleep, sleep duration, and sleep maintenance [93,94]. In this regard, it seems important to consider that in our sample some patients reported subthreshold depressive symptoms. This finding is not surprising, because it is well known that anxiety disorders, as well as sleep disturbances, often manifest in comorbidity with depressive symptoms [95]. In fact, other studies indicated that the serotonergic pathway is implicated in the initiation and maintenance of sleep in different brain areas associated with sleep regulation, and that Vit D plays a key function in the regulation of the serotonergic pathway [46] and melatonin production. Moreover, Vit D contributes to neuroplasticity [59] and to the synthesis of other neurotransmitters [96,97,98], confirming the importance of Vit D not only in sleep but also in mood regulation [99].
Most studies evaluating anxiety-related symptoms in different populations indicate an association between low levels of Vit D and anxiety [28,100,101], and some reported that Vit D supplementation is associated with lower anxiety symptoms [102]. In our sample, the regression analysis confirmed the significant association between higher PTH and lower Vit D levels, poor quality of sleep, and anxiety symptomatology, emphasizing the close relationship between calcium imbalance and psychopathology in patients with GAD. This finding can be explained by the role of calcium imbalance, especially of Vit D, in many brain processes, including neuroimmunomodulation, neuroinflammation, oxidative stress, neuroplasticity [59], and the synthesis of neurotransmitters, all implicated in the pathogenesis of anxiety disorders [96,97,98]. In this regard, Vit D seems to be implicated in the synthesis of serotonin through the tryptophan pathway [46]. Altered serotonin synthesis is associated with dysfunction of the prefrontal cortex [103], hippocampus [104], and amygdala [105], brain regions important in regulating network activity and neural oscillations in anxiety disorders [106,107].
On the other hand, many of the positive effects of Vit D on behavior might be associated with its ability to regulate both peripheral and CNS immune responses. As noted, anxiety is frequently associated with a low-grade inflammatory status and a peripheral increase in inflammatory cytokines [108,109]. As such, Vit D may help reduce anxiety symptoms because of its antioxidant and anti-inflammatory properties. More recently, a preclinical study described the anti-inflammatory and antioxidant effects of pretreatment with Vit D3, underlining the ability of this vitamin to abolish anxiety-like behaviors. Indeed, this effect was accompanied by a decrease in IL-6 levels [110]. These results were replicated in a clinical sample: Vit D supplementation in combination with standard of care improved the severity of anxiety in individuals diagnosed with GAD by increasing serotonin concentrations and decreasing the levels of the inflammatory biomarker neopterin [111].
The results of the present study should be read considering some limitations. First, the cross-sectional study design, the type of patients included (only outpatients), and the relatively small sample size do not allow generalization to a large proportion of the psychiatric population and preclude establishing causal relationships. In this light, prospective studies are recommended. Second, the self-administered scales and the retrospective nature of the data are subject to recall bias, which represents a structural limitation regarding the assembly and reliability of the data. Third, psychiatric medications are known to trigger symptoms of sleep disorders. Due to the heterogeneity of our sample, patients were prescribed different psychotropic medications, which would be difficult to control for. Hence, it was not possible to examine the association between psychotropic medication and symptoms of sleep disorders. Lastly, the wide overlap of features and neurophysiological systems involved in anxiety and depressive symptoms, even if the latter occurred only in a few patients of our sample, prevented us from examining the unique relationship between calcium imbalance and anxiety disorder. Further studies should assess the role that calcium imbalance plays in this relationship, distinguishing mood disorders from anxiety disorders and using major depressive disorder as a control group.
Despite these limitations, the major strengths of this study are represented by the focus on calcium imbalance and sleep quality in patients with GAD in a real-world setting with broad inclusion criteria. Furthermore, this was the first attempt to evaluate the role and implications of calcium homeostasis in GAD, considering its relationships to sleep and anxiety symptoms. Moreover, the study includes the concomitant assessment of Vit D, PTH, and Ca++ levels to assess and analyze the whole metabolism axis. Nevertheless, future large-scale prospective studies are needed to confirm the findings of this study and to better clarify the association between calcium imbalance, sleep quality, and psychopathology severity. Identifying and addressing sleep quality, insomnia, and calcium imbalance may have a positive impact on the prognosis and quality of life of patients with GAD.
## 5. Conclusions
In conclusion, the study found a strong association between levels of parathyroid hormone and Vit D, sleep quality, and anxiety symptomatology in patients suffering from GAD. The study results suggest that patients with GAD and low levels of Vit D and higher levels of PTH exhibit poor quality of sleep and higher levels of anxiety, highlighting the impact of calcium imbalance on the psychopathological burden. These results suggest that calcium homeostasis may be disrupted in this population, but additional prospective studies in real-world settings with direct comparisons between these two conditions are needed. This may therefore represent an area of clinical research interest for the future, helping clinical practice reach more patients through earlier precise diagnosis, personalized treatment, and improved prognosis. Indeed, future studies could shed light on the causal and temporal relationship existing between calcium metabolism imbalance, anxiety, and sleep, opening new and interesting frontiers in both clinical and research fields.
# Changes in the Histological Structure of Adrenal Glands and Corticosterone Level after Whey Protein or Bee Pollen Supplementation in Running and Non-Running Rats
## Abstract
Due to the many health-promoting properties of bee pollen and whey protein, both products are widely used as dietary supplements. In light of these reports, the aim of our study was to assess whether these products can influence the structure and function of the adrenal glands in rats. Thirty male Wistar rats were divided into six equal groups. Among them, there were three groups of non-running rats and three groups of running rats. Both the running (n = 3 groups) and non-running (n = 3 groups) arms included a non-supplemented (control) group, a bee-pollen-supplemented group, and a whey-protein-supplemented group. After 8 weeks, the rats were decapitated, their adrenal glands were collected, and paraffin slides were prepared. Then, staining according to the standard H&E and Masson’s trichrome protocols was performed. Fecal and urine samples were collected prior to the end of the study to measure corticosterone levels. Consumption of bee pollen was significantly higher in the group of non-running rats than in the group of running rats (p < 0.05). The thickness of the particular adrenal cortex layers was similar among all of the groups (p > 0.05). Statistically significant changes in the microscopic structure of the adrenal glands, especially regarding cell nuclei diameter and structure, as well as the architecture of sinusoids, were observed between the groups. Moreover, urine corticosterone concentrations were found to vary between all of the analyzed groups (p < 0.05). These results indicate that both bee pollen and whey protein have limited stress-reducing potential.
## 1. Introduction
Stress is defined as a physiological response of the body to a state of danger. This modification of homeostasis is achieved as a result of complex interactions among the elements of the hypothalamus–pituitary–adrenal axis (HPA axis). As a consequence of axis activation, the adrenal glands—the main organs involved in the stress response—secrete stress hormones, ultimately regulating the functioning of the organism at the multi-organ level [1,2].
The occurrence of stress reactions is conditioned multifactorially, by both internal and external stimuli. Certainly, any change in physical activity—its abandonment or limitation—is a possible source of stress induction [3,4]. Moreover, physical activity is also significant from the perspective of counteracting subsequent negative stress effects [5]. This is especially important considering that exposure to stress has many health consequences and may affect, among others, the cardiovascular system, the nervous system, and immune status. The current literature indicates a significant role of psychological stress in the pathogenesis of asthma, Alzheimer’s disease, and cancer development, although due to the complex determinants of these diseases, it is difficult to assess the exact role of stress in their etiology [6,7,8,9]. In addition to triggering the onset of disease, stress modifies the amount and type of food intake, which has been observed both in humans and in animal models [10]. Some of these changes in food preferences are explained by the influence of corticosterone produced by the adrenal glands [11]. Although the effect of stress on food preferences and on the amount and type of consumed food varies, stress can exacerbate the desire to eat palatable foods rich in fats and sugars, referred to as comfort food [12,13]. Due to the high prevalence of negative stress-related effects in people, substances with beneficial effects on health are being sought to minimize the negative consequences of stress. Hence, it has been shown that different types of food, e.g., coconut oil and mung beans, have potential anti-stress value [14,15].
Bee pollen, a product obtained from honeybees, is a very complex mixture consisting of approximately two hundred different substances. Although bee pollen composition differs depending on the species from which it originates, all major components, such as proteins, lipids, carbohydrates, vitamins, and bio-elements, are present in bee pollen independently of its origin [16]. When analyzing the percentage share of each group of compounds, it is noticeable that the largest part consists of carbohydrates, the content of which reaches about 30 percent. The average protein content in bee pollen oscillates around approximately 20 percent, of which essential amino acids represent a significant proportion. Among the other components of bee pollen are nucleic acids, lipids, and crude fiber [16,17]. In addition, bee pollen is a source of numerous macro- and microelements, with a particularly significant content of potassium, iron, phosphorus, magnesium, and zinc [17]. Moreover, its composition is characterized by the presence of practically all vitamins, including provitamin A, vitamin E, thiamine, niacin, and pantothenic, nicotinic, and folic acid. Among the bioactive substances contained in bee pollen, the presence of phenolic compounds is worthy of note, since flavonoids, which make up a significant proportion of that group of compounds, are responsible for the antioxidant properties of bee pollen [17,18]. Such a rich composition is responsible for a number of the known health benefits of this substance, including hypolipidemic and glucose-ameliorating activities, as well as detoxifying and anti-inflammatory action [17,19]. All of these nutritional properties make bee pollen a valuable functional food able to enrich the diet [19,20].
Whey protein is a substance representing a significant proportion of the proteins contained in cow milk. It is processed to produce preparations such as whey protein concentrate (WPC), whey protein isolate (WPI), or whey protein hydrolysate (WPH), with varying protein contents [21]. Regardless of the processing route, whey protein is primarily a rich source of β-lactoglobulin and α-lactalbumin. In addition, it is characterized by a content of ingredients such as essential amino acids, branched-chain amino acids, immunoglobulins, and lactoferrin [22,23]. Currently, whey protein is widely used as a supplement among athletes due to its beneficial effects on muscles [23,24]. Its anti-inflammatory, cardioprotective, neuroprotective, and anti-cancer properties also support its role as a functional food [23,24,25].
Although to date there are no studies on the anti-stress potential of bee pollen, there are reports on the effects on stress of similar products, such as propolis and royal jelly, suggesting a potential anti-stress effect in different animals [26,27,28,29,30]. Furthermore, a protein-rich diet is also known for its potential anti-stress properties [31]. In view of these reports, we investigated whether bee pollen and whey protein supplementation may influence the histological properties and function of the adrenal glands, and thus also the response to stress [32].
## 2.1. Study Protocol
Thirty 8-week-old male Wistar rats were divided into six equal groups (five rats per group). Non-supplemented groups (No. I and No. II), also referred to as the control groups, included a non-running group (No. I) and a running group (No. II). The experimental groups (No. III–VI), understood as the supplemented ones, were non-running (No. III–IV) and supplemented with whey protein (No. III) or bee pollen (No. IV), as well as running (No. V–VI) and supplemented with whey protein (No. V) or bee pollen (No. VI) (Figure 1). During the 8 weeks of the experimental phase, all of the animals received water and rodent food ad libitum; the bee-pollen-supplemented groups also received bee pollen, and the whey-protein-supplemented groups also received enriched whey protein concentrate (Olimp Laboratories Sp. z.o.o., Dębica, Poland). The daily rodent food, bee pollen, whey protein, and water consumption were measured each day. During the experimental phase, the rats in the running groups ran five times per week, with the duration of a single run being 5 min, on a treadmill built by us before starting the experiment. The average velocity was 6 km/h, and the rats were not assisted by electrical shock. The rats from the non-running groups did not use the treadmill. At the beginning of the experiment, the body mass of the rats was approximately 330 g, while at the end of the experiment, it increased to approximately 400 g, regardless of the group. At the end of the experimental phase, all of the rats were decapitated and their adrenal glands were collected. Immediately after collection, the adrenal gland mass was measured with a digital analytical balance with 0.1 mg readability (AS 110.R2; Radwag, Lublin, Poland). After fixation in formalin, paraffin blocks were prepared.
The study protocol was approved by the Bioethical Committee at the Medical University of Lublin (No. 24/2015).
## 2.2. Supplements
The bee pollen was collected in the vicinity of Lublin, Poland. It contained approximately 31 g of carbohydrates, 23 g of protein, 5 g of lipids, and 0.8 g of various vitamins (A, E, D, B1, B2, B3, B5, B6, B7, and C) per 100 g [16]. Per 100 g, the enriched whey protein concentrate (hereafter called whey protein or WPC) contained 77 g of protein, 6 g of carbohydrates, and 7 g of lipids, as reported previously [33].
## 2.3. Histological Staining and Analysis
Five-μm-thick sections were prepared and stained according to the standard H&E and Masson’s trichrome protocols. The slides were then analyzed under a light microscope; an Olympus BX4 with a digital camera and CellSens software (Version 4.1 CS-EN-V4) was used for image capture. The measurement of vacuolization was performed in Fiji as previously described [34]. The vacuolization rate was calculated as the percentage of the area occupied by vacuoles relative to the total analyzed area in the particular cortex layer. The measurement of the extent of fibrosis was performed in Fiji, as reported previously [33].
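The vacuolization rate lends itself to a compact re-implementation: lipid-droplet vacuoles appear as bright, unstained regions in H&E sections, so an intensity threshold yields the fraction of the analyzed area they occupy. The scikit-image sketch below illustrates this area-fraction idea on a synthetic image; it is our own reconstruction (the published measurements were performed in Fiji [34]), and the Otsu threshold is an assumption rather than the authors' protocol.

```python
import numpy as np
from skimage import filters

# Synthetic grayscale "section": dark tissue background with bright circular
# vacuoles, standing in for an ROI exported from the microscope.
rng = np.random.default_rng(0)
gray = rng.normal(0.35, 0.05, (512, 512))
yy, xx = np.mgrid[:512, :512]
for cy, cx, r in [(100, 120, 20), (300, 260, 35), (420, 80, 15)]:
    gray[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 0.9  # bright vacuole

thresh = filters.threshold_otsu(gray)      # split bright vacuoles from tissue
vacuole_mask = gray > thresh
rate = 100.0 * vacuole_mask.mean()         # vacuole pixels / total pixels
print(f"Vacuolization rate: {rate:.1f}% of the analyzed area")
```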
## 2.4. ELISA
A corticosterone ELISA kit (Cayman Chemical, Ann Arbor, MI, USA) was used for the measurement of fecal corticosterone content. Fresh feces samples, collected during the second-to-last week of the experiment, were frozen at −80 °C immediately after collection. Prior to corticosterone measurement, the samples were dried in a heat cabinet at 30 °C for 2 h, as proposed by L. Pihl and J. Hau [35]. Then, the samples were prepared according to the manufacturer’s instructions. Large particles were removed by sifting through a stainless steel mesh. Twenty mg of each sample was suspended in 1 mL of methanol. Then, the samples were vortexed for 30 min and centrifuged for 20 min at 2500× g. The supernatant was transferred into a clean Eppendorf tube and diluted 1:50 in ELISA Buffer (Cayman Chemical, Ann Arbor, MI, USA). The samples prepared in this way were then used in the ELISA according to the manufacturer’s protocol. The absorbance was measured with a Biotek Elx-800 plate reader (BioTek, Winooski, VT, USA).
A corticosterone ELISA Kit (R&D Systems, Minneapolis, MN, USA) was used for corticosterone measurement in urine. Fresh urine samples were collected from metabolic cages shortly prior to study termination and immediately frozen at −80 °C. Prior to analysis, the samples were first transferred to −20 °C and then completely thawed. They were centrifuged for 10 min at 18,000× g. The supernatant was collected and diluted 100-fold with Calibrator Diluent RD5-43 supplied as part of the kit. The ELISA assay was prepared according to the manufacturer’s instructions. The microplate was read with the Biotek Elx-800. The raw data were analyzed with elisaanalysis.com (ElisaKit) in the case of urine and with a spreadsheet supplied by Cayman Chemicals in the case of feces. Each sample was run in duplicate.
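Both assays end with the same back-calculation: interpolate the sample absorbance on a standard curve and multiply by the dilution factor (1:50 for the fecal extracts, 100-fold for urine). The sketch below shows this with a four-parameter logistic (4PL) fit, a model commonly used for competitive immunoassays; the standard concentrations and absorbances are placeholders, and the kit-supplied analysis tools mentioned above may use a different curve model.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Placeholder standard series for a competitive assay (absorbance falls as
# concentration rises); real values come from the kit's standards.
std_conc = np.array([39, 78, 156, 312, 625, 1250, 2500, 5000])   # pg/mL
std_od = np.array([1.80, 1.55, 1.25, 0.95, 0.65, 0.42, 0.25, 0.15])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[2.0, 1.0, 500.0, 0.1])

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL to recover concentration from absorbance."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

sample_od = 0.80
dilution = 100  # urine samples were diluted 100-fold before the assay
print(f"~{od_to_conc(sample_od, *params) * dilution:.0f} pg/mL in the original sample")
```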
## 2.5. Statistical Analysis
The collected data were statistically analyzed with Statistica 12 (StatSoft, Tulsa, OK, USA). The distribution of the data was analyzed with the Shapiro–Wilk test. Statistical significance was calculated with the Kruskal–Wallis test, and the level of significance was set at p < 0.05.
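A minimal SciPy sketch of this workflow (Shapiro–Wilk per group, then Kruskal–Wallis across the six groups at p < 0.05) is shown below, standing in for Statistica; the group arrays are simulated placeholders for the five-rat groups, not study measurements.

```python
import numpy as np
from scipy.stats import shapiro, kruskal

# Simulated measurements for six groups of five rats each (placeholders).
rng = np.random.default_rng(0)
groups = [rng.normal(loc, 1.0, size=5) for loc in (10, 10, 11, 9, 12, 10)]

# Step 1: check the distribution of each group.
for i, g in enumerate(groups, start=1):
    w, p = shapiro(g)
    print(f"group {i}: Shapiro-Wilk p = {p:.3f}")

# Step 2: non-parametric comparison across all six groups.
h, p = kruskal(*groups)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}, significant = {p < 0.05}")
```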
## 3.1. Food Intake
The daily whey protein intake was similar in both the running (5.19 g per rat) and non-running (5.18 g per rat) groups, while the consumption of bee pollen varied with 13.45 g per rat in the non-running group and 11.96 g per rat in the running group. The detailed consumption and mass changes have been reported previously [33].
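As a back-of-the-envelope check, these intakes can be combined with the supplement compositions given in Section 2.2 to estimate how much of each macronutrient the supplements delivered per rat per day. The short calculation below is our own arithmetic based on those two sets of figures, not additionally reported study data.

```python
# Macronutrient content per 100 g, from Section 2.2.
bee_pollen_per_100g = {"carbohydrate": 31, "protein": 23, "lipid": 5}
wpc_per_100g = {"carbohydrate": 6, "protein": 77, "lipid": 7}

def daily_macros(intake_g: float, composition: dict) -> dict:
    """Grams of each macronutrient delivered by intake_g of supplement."""
    return {k: round(v * intake_g / 100, 2) for k, v in composition.items()}

print("bee pollen, non-running (13.45 g/day):", daily_macros(13.45, bee_pollen_per_100g))
print("whey protein, running (5.19 g/day):", daily_macros(5.19, wpc_per_100g))
# -> bee pollen supplies ~4.2 g carbohydrate/day; WPC supplies ~4.0 g protein/day
```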
## 3.2. Adrenal Gland Mass
No statistically significant difference in adrenal gland mass was observed (p = 0.65). Specifically, no differences were noted between any of the experimental groups and their respective control groups, and the two control groups did not differ from one another either.
## 3.3. Vacuoles and Structure
No significant changes in adrenal architecture were noted under the optical microscope after H&E staining at 400× magnification. In addition, no statistically significant changes in the contribution of lipid droplets to the total selected area within the different layers of the cortex were observed, but the experimental groups supplemented with bee pollen exhibited a slight decrease in the vacuolization of both the zona glomerulosa and the zona fasciculata (Table 1).
The initial evaluation revealed possible differences in the diameter of the nuclei, which were later confirmed by detailed measurements (Table 2). Next, the sinusoids were assessed. The differences in sinusoid width between the groups were statistically significant (Table 2). Sinusoid epithelium thickness was decreased in both of the non-running experimental groups in comparison to the non-running control group. On the contrary, there was a tendency toward increased sinusoid epithelium thickness in the supplemented running groups compared to the running control (Table 2). The sinusoid epithelial cell nucleus diameter tended to increase in the bee-pollen-supplemented non-running group compared to the non-running control group. The capsule thickness increased in all four experimental groups in comparison to the control groups (Table 2).
## 3.4. Fibrosis
Neither visual evaluation nor computed analysis revealed significant fibrosis in any of the groups except for the bee-pollen-supplemented running group. In the latter, mild fibrosis was noted in all of the layers of the adrenal cortex. Additionally, the results of the computed analysis indicated a decrease in collagen fibers in the bee-pollen-supplemented non-running group in comparison to the non-running control group (Table 3).
## 3.5. Corticosterone Production
The urine corticosterone concentration significantly differed in all four experimental groups—both of the tested non-running groups (III—whey-protein-supplemented and IV—bee-pollen-supplemented) exhibited values lower than the non-running control, while both of the tested running groups (V—whey-protein-supplemented and VI—bee-pollen-supplemented) scored higher than the running control (Table 4). Moreover, in the whey-protein-supplemented groups, both running and non-running, lower urinary corticosterone levels were observed in comparison to the bee-pollen-supplemented groups (Table 4). Furthermore, the non-running control group showed higher urinary corticosterone levels than the running control group (Table 4). No statistically significant changes in fecal corticosterone concentration or total daily corticosterone excretion were noted (Table 4).
## 3.6. Pyknotic Nuclei
In all zones of the adrenal gland cortex, statistically significant changes in the percentage of pyknotic nuclei were observed between particular groups. In the non-running experimental groups, supplementation with both bee pollen and whey protein resulted in a lower mean percentage of pyknotic nuclei in all of the adrenal cortex zones in comparison to the non-running control group (Table 5, Figure 2).
## 4. Discussion
To the best of the authors’ knowledge, this is the first study on the effects of bee pollen and whey protein supplementation on adrenal function and structure. The impact of stress on eating behavior is a very complex phenomenon. Depending on the severity of stress and its duration, the changes observed in dietary habits differ substantially. The efficient regulation of the whole process, especially in the case of chronic stress, is provided by the comprehensive action of the hypothalamic–pituitary–adrenal (HPA) axis. Stress affects both the amount of food eaten and the tendency to eat specific foods, as observed in both human and animal models [10,36,37]. While acute stress usually results in reduced food intake, exposure to chronic stressful stimuli leads to an increased desire to consume food. In addition, the regulation of food intake is influenced by the reward center, which in response to palatable food consumption decreases the activity of the HPA axis, resulting in the suppression of the stress response. This may explain why rats under stress are usually more likely to consume sugar-rich comfort products [38,39].
The nutritional trends observed in the current study indicated no change in whey protein intake between the running and non-running groups of rats, whereas the difference in bee pollen intake between these groups was statistically significant. Rats in the non-running group tended to consume more bee pollen than the running rats. When analyzing the possible reasons for these observations, it is worth considering the differences in composition between these two palatable foods. Bee pollen has a much higher carbohydrate content than whey protein, and it has already been reported that rats in stressful situations increase their intake of sugar-rich foods [39].
Consistent with the aforementioned findings, the body’s response to stress is inextricably linked to the functioning of the HPA axis. The adrenals especially, due to their high plasticity, are the most significantly modified organs, morphologically and functionally, in response to stress [40]. Hence, we decided to evaluate the adrenal glands in their morphologic and functional aspects.
In the current study, no changes in adrenal gland weight were observed between the groups. Our findings are consistent with several previous publications evaluating the effects of stress on adrenal mass. Marin et al. (2007) did not observe any changes in adrenal gland weight after exposure to stress or chronic restraint [41]. Similarly, it has previously been reported that exposure to chronic variable stress also did not induce changes in adrenal mass [42]. On the other hand, Díaz-Aguila et al. (2018) reported that the combined effect of stress and a high-sucrose diet caused an increase in the mass of the right adrenal gland in rats. However, that study showed that these two variables separately did not affect adrenal weight, which is partly consistent with our results [43]. Nonetheless, there are also reports contrasting with the above conclusions, indicating that stress can increase adrenal weight in rats [44,45,46]. Such an increase in adrenal weight may be due to stimulation of the adrenal glands by ACTH, resulting in hypertrophy of the organ [46].
Exposure to stress in rats may lead to changes in the thickness of the adrenal cortex layers—both thinning and thickening are possible [43,47,48]. However, in the present study, microscopic evaluation of the adrenal cortex layers across the groups did not show statistically significant differences.
We found that the differences in the mean diameter of the cell nuclei in all of the cortex layers and in the medulla between all of the groups were statistically significant. Moreover, we observed a noticeable tendency toward an increased mean diameter of the cell nuclei in the zona glomerulosa and zona fasciculata, both in the non-running supplemented groups compared to the non-running control and in the running supplemented groups compared to the running control. An increased nucleus diameter may be a sign of increased protein synthesis and may also be related to increased hormonal secretion [49]. Findings previously obtained from rat models have indicated that changes in nuclear structure result from experiencing stress. In a study conducted by Zaki et al. (2018), stress resulted in increased nuclear pyknosis [48], and our results are in line with these statements, as we noticed that supplementation with our comfort foods (whey protein and bee pollen) lowered the number of pyknotic nuclei. Similarly, recent experiments have shown the protective effects of whey protein on hepatic cell nuclei [50] and of propolis on the olfactory bulb [51].
Although there are reports that acute stress induces adrenal fibrosis, in our experiment there was no evidence thereof [52]. Perhaps this is due to the fact that in our experiment the stress situation was chronic. Nevertheless, we found mild fibrosis of all layers of the adrenal cortex in the group of running rats supplemented with bee pollen. Additionally, computed analysis showed that bee pollen supplementation in the non-running groups resulted in a decrease in collagen fibers in comparison to the non-running control group. Our results suggest a potential fibrosis-reducing effect exerted by bee pollen. This is supported by previous reports suggesting that another bee product—bee bread—has the ability to reduce liver fibrosis induced by a high-fat diet [53].
Further microscopic analysis also provided observations on the degree of vacuolization of individual layers of the adrenal cortex in each group. According to Koko et al. (2004), a tendency toward decreased vacuolization accompanied by reduced corticosterone levels can be the result of exposure to short-term stress [52]. The measurement of corticosterone levels in urine and feces is a sensitive marker reflecting the adrenal condition and the level of stress in the body of rats [54]. In the present study, significant differences in corticosterone concentrations were observed only in urine measurements, whereas no differences were found in measurements from stool samples. It has been proven previously that corticosterone concentrations in urine samples reflect the diurnal secretion profile of the hormone [55]. Thus, analysis of changes in urinary corticosterone levels captures a likely picture of stress levels in rats. The differences in the urinary corticosterone concentrations observed in the current study indicate that additional movement was effective in minimizing stress in rats. These results are in line with reports that restriction of movement in rats caused an increase in blood corticosterone levels [56]. Similarly, our results corroborate those of a previous study in which immobilization and restriction to a small space resulted in an increase in urinary corticosterone levels [57]. In addition, Premack et al. (1963) showed that depriving rats of their daily activities leads to an increase in food intake, supporting our previously discussed model of stress as an inducer of changes in food intake [58].
Moreover, the results of the present study show that, compared to the non-running control group, the supplemented non-running rats had a reduction in urinary corticosterone excretion. This suggests that stress was decreased by the consumption of both bee pollen and whey protein. The present findings corroborate previous research on the impact of high-protein, high-carbohydrate comfort food on the level of stress in animals [31]. Previous studies have shown that royal jelly (another bee product) has the ability to lower plasma corticosterone levels [29,30]. Additionally, an experiment conducted by Teixeira et al. (2017) using royal jelly showed that this product has the ability to reduce corticosterone levels in rats even when they are not under stress [30]. Similarly, an experiment conducted on broilers proved that propolis, another product of bee origin, attenuates the endocrine component of the stress response by lowering corticosterone levels in broilers kept in stressful conditions [28]. Thus, the common anti-stress effect of bee products may be based on the activity of one of the proteins contained in them, which is capable of inhibiting cholesterol synthesis and consequently inhibiting corticosterone synthesis [59]. Furthermore, based on the fact that bee pollen has a high carbohydrate content and that comfort food with a similar percentage carbohydrate content is able to lower serum corticosterone concentrations, we can speculate whether this contributed to our results [31,60,61]. On the other hand, the potential anti-stress effect achieved by consumption of carbohydrate-rich foods is not fully supported by the results of another study, conducted by Zeeni et al. (2015), in which two types of diets—high-carbohydrate and high-carbohydrate enriched with highly palatable products—were used in stressed rats. Indeed, rats supplemented with the latter showed significantly lower serum corticosterone levels than rats eating only the carbohydrate-rich diet [44]. It can be concluded that not only the use of a high-carbohydrate diet but also its additional enrichment was responsible for the reduction of corticosterone concentrations. Since in the current study the intake of whey protein resulted in lower levels of hormone excreted in urine in comparison with bee pollen supplementation, we speculate that whey protein consumption is more effective in affecting adrenal function. Indeed, it has previously been reported that the production of corticosterone in rats is at least partially regulated by dietary protein intake [62,63]. Additionally, Makkar et al. (2016) found that supplementation with 0.5% whey protein caused a noticeable decrease in serum corticosterone levels in poultry [64], and similarly, Greco et al. (1982) observed that a high-protein diet caused a decrease in serum corticosterone levels in rats [65]. Nonetheless, to understand the potential effects of whey protein on adrenal function, we need to look at its individual bioactive fractions, such as lactoferrin and lactalbumin [22]. Maekawa et al. (2017) found that intraperitoneal administration of bovine lactoferrin to rats resulted in a decrease in serum corticosterone levels [66]. Furthermore, another prior experiment showed effects similar to those of lactoferrin administration [67].
Although our study is, to the best of our knowledge, the first to evaluate the impact of bee pollen and whey protein supplementation on adrenal histology and function, we are aware of its several limitations. First, we regard the small sample sizes as a substantial limiting factor. Moreover, urine samples were collected while the rats were in metabolic cages. Although they were held in them for only 24 h, they might have been significantly stressed due to the new housing and solitude. Additionally, the implementation of other sources of stress in rats would allow more reliable conclusions to be drawn. Equally importantly, the entire concept of the study followed the growing interest in functional food consumption among people, as well as the frequent experience of stress. Hence, in order to verify the effects of these substances on human organisms, future studies should be conducted on this target group.
## 5. Conclusions
The histological structure and functioning of the adrenal glands are a reflection of the impact of stress on the body. The application of exogenous factors, including diet enrichment, may modify some of the observed stress-derived changes. We noticed that bee pollen and whey protein have a significant effect on the reduction of urine corticosterone concentrations, which supports their anti-stress value. Additionally, the observed decrease in the percentage of pyknotic nuclei in particular layers in both of the non-running supplemented groups in comparison to the non-running control may suggest protective and beneficial effects of bee pollen and whey protein consumption on the adrenal glands. Overall, the intake of bee pollen and whey protein seems to have limited but promising potential for stress reduction. Nevertheless, as mentioned above, since the primary objective is to verify the effects of these substances on the human body, there is a need for further experiments.
# Gut Microbiota Dysbiosis Ameliorates in LNK-Deficient Mouse Models with Obesity-Induced Insulin Resistance Improvement
## Abstract
Purpose: To investigate the potential role of gut microbiota in obesity-induced insulin resistance (IR). Methods: Four-week-old male C57BL/6 wild-type mice (n = 6) and whole-body SH2 domain-containing adaptor protein (LNK)-deficient mice on a C57BL/6 genetic background (n = 7) were fed a high-fat diet (HFD, 60% calories from fat) for 16 weeks. The gut microbiota of 13 mouse fecal samples was analyzed by 16S rRNA sequencing. Results: The structure and composition of the gut microbiota community of WT mice were significantly different from those in the LNK-/- group. The abundance of the lipopolysaccharide (LPS)-producing phylum Proteobacteria was increased in WT mice, while some short-chain fatty acid (SCFA)-producing genera in the WT groups were significantly lower than in the LNK-/- groups (p < 0.05). Conclusions: The structure and composition of the intestinal microbiota community of obese WT mice were significantly different from those in the LNK-/- group. The abnormality of gut microbial structure and composition might interfere with glucolipid metabolism and exacerbate obesity-induced IR by increasing LPS-producing genera while reducing SCFA-producing probiotics.
## 1. Introduction
Obesity is becoming a worldwide health risk factor, and obesity-induced morbidity and complications account for huge costs for affected individuals, families, healthcare systems, and society at large. Obesity is a low-grade, sustained inflammatory state that alters whole-body metabolism and frequently leads to insulin resistance (IR) [1], which in turn plays a vital role in the pathogenesis of obesity-associated hyperlipidemia, non-alcoholic fatty liver disease, polycystic ovary syndrome, type 2 diabetes, and atherosclerotic cardiovascular disease [2]. Nutrients and substrates, as well as systems involved in host–nutrient interactions, including the gut microbiota, have also been identified as modulators of the metabolic pathways controlling insulin action and obesity regulation [3]. However, the molecular mechanism of IR has not been fully clarified.
Gut microbiota is the general term for the microbes that inhabit the gastrointestinal tract of the human body. Around 98–99% of the intestinal microbiomes can be classified into four groups: Bacteroidetes, Firmicutes, Proteobacteria, and Actinomycetes. The balance of intestinal microbial species is key to keeping intestinal immune function normal and maintaining the homeostasis of the body. Breaking this balance leads to serious pathophysiological changes, which is called gut microbiota dysbiosis [4]. An increasing number of studies have shown that Bacteroides are associated with high-fat and high-protein diets [5] and that an imbalance of intestinal microecology might be involved in the occurrence of many diseases, such as irritable bowel syndrome, obesity, type 2 diabetes, metabolic syndrome (MetS), and cardiovascular diseases [6,7,8].
Metagenomic and 16S rRNA sequencing have been used to detect changes in the intestinal microbiota of patients with prediabetes, type 2 diabetes, and MetS. Two studies found that, although the patients' ethnicities and diets differed, in type 2 diabetes patients the proportion of butyrate-producing clostridia such as *Roseburia* and *Clostridium leptum* decreased, while that of non-butyrate-producing *Clostridium* species increased [9,10]. The levels of Firmicutes and Clostridia in the gut microbiota of type 2 diabetes patients were significantly decreased compared to normal controls, and the ratio of Bacteroidetes to Firmicutes was increased and positively correlated with blood glucose concentrations [11]. Thus, the intestinal microbiota changes in people with abnormal glucose metabolism, and these changes also seem to be involved in the occurrence and remission of abnormal glucose metabolism. It was reported that transplanting feces from mice with abnormal glucose metabolism into healthy germ-free mice could induce abnormal glucose metabolism [12]. Furthermore, transplanting feces from lean donors into patients with MetS could increase their gut microbiota diversity and insulin sensitivity [13]. These results suggest that the intestinal microbiota is closely related to the occurrence and development of abnormal glucose metabolism, and that IR, as an important link in this process, also seems to be related to the intestinal microbiota.
Our previous study discovered that ovarian tissues from PCOS patients with IR exhibited higher expression of the SH2 domain-containing adaptor protein (LNK) than ovaries from normal control subjects and PCOS patients without IR [14]. In addition, we found that wild-type (WT) mice fed a high-fat diet (HFD) accumulated more intrahepatic triglyceride and had higher serum triglyceride (TG) and free fatty acid (FFA) levels than LNK-deficient (LNK-/-) mice. LNK deficiency improved glucose metabolism and IR in obese mice, suggesting that LNK might play a pivotal role in controlling glucolipid metabolism and obesity-induced IR by regulating IRS1/PI3K/Akt/AS160 signaling and the AKT/FOXO3 pathway [15,16]. Therefore, we chose LNK-/- mice as the IR-improved model and WT mice as the MetS/IR model. In this study, we compared the intestinal microbiota of LNK-/- and WT mice fed an HFD, with the aim of exploring the potential influence of the gut microbiome on glucolipid metabolic disorder and obesity-induced IR.
## 2.1. Animals
The study protocol was approved by the Research Ethics Board of Sun Yat-sen Memorial Hospital of Sun Yat-sen University and Guangdong Provincial People’s Hospital. All experimental procedures were approved by the Committee for Animal Research of Sun Yat-sen University and performed in accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals.
Four-week-old male C57BL/6 wild-type mice ($$n = 6$$) were purchased from the animal research center of Sun Yat-sen University. Whole-body LNK-deficient mice on a C57BL/6 genetic background ($$n = 7$$) were created via CRISPR/Cas-mediated genome engineering by Cyagen Biosciences Inc. The mouse Sh2b3 gene (GenBank accession number: NM_001306127.1; Ensembl: ENSMUSG00000042594) is located on mouse chromosome 5. Exon 1 to exon 3 were selected as target sites. Cas9 mRNA and gRNA generated by in vitro transcription were then injected into fertilized eggs for knockout mouse production. All mice were randomly divided into groups and housed 4 to 5 per cage under standard laboratory conditions (12 h light:12 h darkness cycle) at a controlled temperature (23 ± 2 °C) with free access to rodent feed and water. All mice (4–5 weeks old) were fed a high-fat diet (HFD, $60\%$ calories from fat, D12492; Research Diets Inc., New Brunswick, NJ, USA) for 16 weeks.
## 2.2. Sample Collection
After the mice had been fed the HFD for 16 weeks, fecal samples were collected and immediately frozen at −80 °C until processed for analysis. Total DNA was isolated from the fecal samples using the MasterPure Complete DNA&RNA Purification Kit (Epicenter) according to the manufacturer’s instructions, with some modifications as described previously [17].
## 2.3. 16S rRNA Extraction and Sequencing
DNA was extracted using a DNA extraction kit appropriate for the corresponding sample. The concentration and purity were measured using the NanoDrop One (Thermo Fisher Scientific, Waltham, MA, USA). Next, 16S rRNA/18S rRNA/ITS genes of distinct regions (e.g., bacterial 16S: V3–V4/V4/V4–V5; fungal 18S: V4/V5; ITS1/ITS2; archaeal 16S: V4–V5) were amplified using specific primers (e.g., 16S: 338F and 806R/515F and 806R/515F and 907R; 18S: 528F and 706R/817F and 1196R; ITS5-1737F and ITS2-2043R/ITS3-F and ITS4R; archaea: Arch519F and Arch915R) carrying a 12-bp sample barcode. Primers were synthesized by Invitrogen (Invitrogen, Carlsbad, CA, USA). PCR reactions, containing 25 μL 2× Premix Taq (Takara Biotechnology, Dalian Co. Ltd., Dalian, China), 1 μL of each primer (10 μM), and 3 μL DNA template (20 ng/μL) in a total volume of 50 µL, were amplified via thermocycling: 5 min at 94 °C for initialization; 30 cycles of 30 s denaturation at 94 °C, 30 s annealing at 52 °C, and 30 s extension at 72 °C; followed by a 10 min final elongation at 72 °C. The PCR instrument was a Bio-Rad S1000 (Bio-Rad Laboratory, Hercules, CA, USA). The length and concentration of the PCR products were checked via $1\%$ agarose gel electrophoresis. Samples with a bright main band of the expected size (e.g., 16S V4: 290–310 bp; 16S V4–V5: 400–450 bp) were used for further experiments. PCR products were mixed in equidensity ratios according to the GeneTools Analysis Software (Version 4.03.05.0, SynGene, Cambridge, UK). The mixture of PCR products was then purified with the E.Z.N.A. Gel Extraction Kit (Omega, Bellevue, WA, USA). Next, sequencing libraries were generated using the NEBNext® Ultra™ II DNA Library Prep Kit for Illumina® (New England Biolabs, Ipswich, MA, USA) following the manufacturer’s recommendations, and index codes were added. The library quality was assessed on a Qubit 2.0 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA). Finally, the library was sequenced on an Illumina Nova6000 platform, and 250 bp paired-end reads were generated.
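Because each amplicon carries a 12-bp sample barcode, reads must first be assigned back to their source samples. A minimal Python sketch of exact-match demultiplexing is shown below; the input file name and the barcode-to-sample map are hypothetical placeholders, not values from this study.

```python
# Minimal demultiplexing sketch: assign each FASTQ read to a sample by
# its leading 12-bp barcode (exact match only). The barcodes and the
# input file name are hypothetical placeholders.
import gzip

BARCODES = {
    "ACGTACGTACGT": "LNK_KO_1",   # hypothetical barcode -> sample map
    "TGCATGCATGCA": "WT_1",
}

def demultiplex(fastq_gz):
    counts = {sample: 0 for sample in BARCODES.values()}
    with gzip.open(fastq_gz, "rt") as fh:
        while True:
            header = fh.readline()
            if not header:            # end of file
                break
            seq = fh.readline().rstrip()
            fh.readline()             # '+' separator line
            fh.readline()             # quality line
            sample = BARCODES.get(seq[:12])
            if sample is not None:
                counts[sample] += 1
    return counts

print(demultiplex("run1_R1.fastq.gz"))
```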
## 2.4. Data Analysis
Fastp (version 0.14.1) was used to quality-control the raw data with a sliding window (-W 4 -M 20). Primers were removed using cutadapt, based on the primer information at the beginning and end of each sequence, to obtain paired-end clean reads. Paired-end clean reads were then merged using usearch -fastq_mergepairs (V10) based on the overlap between reads generated from opposite ends of the same DNA fragment: at least a 16 bp overlap was required, with a maximum of 5 bp of mismatches allowed in the overlap region, and the merged sequences were called raw tags. Fastp (version 0.14.1) was applied again with the same sliding-window settings (-W 4 -M 20) to obtain paired-end clean tags.
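For illustration, the pipeline above can be driven from Python roughly as follows. This is a sketch under stated assumptions: the file names are placeholders, the primer sequences shown are commonly cited 338F/806R sequences rather than the authors' verified ones, and only the flags named in the text (-W 4 -M 20, 16-bp minimum overlap, 5 mismatches) are taken from the study.

```python
# Sketch of the read-processing pipeline: fastp sliding-window QC,
# cutadapt primer removal, and usearch read merging. File names and
# primer sequences are illustrative assumptions.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Quality control with a 4-base sliding window and Q20 cutoff (-W 4 -M 20).
run(["fastp", "-i", "raw_R1.fq.gz", "-I", "raw_R2.fq.gz",
     "-o", "clean_R1.fq.gz", "-O", "clean_R2.fq.gz",
     "--cut_right", "-W", "4", "-M", "20"])

# 2. Remove primers (commonly cited 338F/806R sequences; an assumption here).
run(["cutadapt", "-g", "ACTCCTACGGGAGGCAGCAG", "-G", "GGACTACHVGGGTWTCTAAT",
     "-o", "trim_R1.fq.gz", "-p", "trim_R2.fq.gz",
     "clean_R1.fq.gz", "clean_R2.fq.gz"])

# 3. Merge read pairs into raw tags (>=16 bp overlap, <=5 mismatches).
run(["usearch", "-fastq_mergepairs", "trim_R1.fq.gz", "-reverse", "trim_R2.fq.gz",
     "-fastq_minovlen", "16", "-fastq_maxdiffs", "5",
     "-fastqout", "raw_tags.fq"])
```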
R software was used to count the union (pan) and intersection (core) of taxa at the target classification level across samples, in order to evaluate whether the sample size was sufficient, and to analyze shared and endemic species, community composition, and species richness.
## 3.1. Diversity Difference of Intestinal Microbiota between LNK-/- and WT Mice
A total of 13 mice (7 LNK-/- mice and 6 WT mice) were included in this study. At week 0, the average body weights of LNK-/- mice and WT mice were 21 g ± 2.2 g and 21.1 g ± 2.1 g, respectively, with no significant difference ($p > 0.05$). During HFD feeding, we observed that LNK-/- mice had a loss of appetite compared with WT mice; the food intakes of LNK-/- mice and WT mice were 18.8 g ± 1.3 g and 19.4 g ± 1.8 g, respectively, a statistically significant difference ($p < 0.05$). After 16 weeks, there was a significant difference in body weight between LNK-/- mice (47.5 g ± 4.6 g) and WT mice (52.6 g ± 3.3 g) ($p < 0.05$). All thirteen fecal samples from the seven LNK-/- mice and six WT mice were analyzed. The majority of intestinal microbial species were shared between LNK-/- and WT mice; however, the diversity of the gut microbiome in the WT group was lower than that in the LNK-/- group (Figure 1A). The α diversity of the gut microbiota, calculated using the Shannon index, showed that species diversity in the LNK-/- group was higher than in the WT group at the phylum level ($p < 0.05$, t-test) (Figure 1B,C).
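For illustration, the Shannon index comparison reported in Figure 1B,C amounts to the following computation; the count vectors below are made-up placeholders, not study data.

```python
# Shannon diversity per sample from taxon counts, then a two-sample
# t-test between genotypes. Counts are illustrative placeholders.
import numpy as np
from scipy import stats

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over nonzero taxa."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

lnk_ko = [shannon(c) for c in ([120, 80, 40, 30, 10], [100, 90, 50, 20, 15])]
wt     = [shannon(c) for c in ([200, 30, 10, 5, 1],   [180, 40, 15, 8, 2])]

t, p = stats.ttest_ind(lnk_ko, wt)
print(f"H'(LNK-/-) = {np.mean(lnk_ko):.2f}, H'(WT) = {np.mean(wt):.2f}, p = {p:.3f}")
```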
## 3.2. Composition and Abundance Difference of Intestinal Microbiota between LNK-/- and WT Mice
To compare the compositional differences in the intestinal microbiota between LNK-/- and WT mice, we next performed a Bray–Curtis-based principal coordinates analysis (PCoA) (Figure 2A). The microbial communities of the two groups differed significantly (Bray–Curtis PERMANOVA, $p = 0.016$). In addition, the composition of the microbiota in the LNK-/- samples was more heterogeneous and significantly different from that of the WT group. The heat map showed that the gut microbiota compositions of the LNK-/- and WT groups were markedly different (Figure 2B).
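As a sketch of this ordination step, assuming a samples-by-taxa abundance matrix, Bray–Curtis distances can be computed with scipy and classical PCoA obtained by eigendecomposition of the double-centred squared-distance matrix:

```python
# Bray-Curtis PCoA sketch: 13 simulated samples x 30 taxa. The abundance
# matrix is a random placeholder, not study data.
import numpy as np
from scipy.spatial.distance import pdist, squareform

X = np.random.default_rng(0).poisson(50, size=(13, 30)).astype(float)

D = squareform(pdist(X, metric="braycurtis"))  # pairwise Bray-Curtis distances
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n            # centring matrix
B = -0.5 * J @ (D ** 2) @ J                    # Gower's double centring
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]              # sort axes by explained variance
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0.0))
print("PCoA coordinates (axes 1-2):\n", coords.round(3))
```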
In the phylum-level taxonomic classification, the WT group was dominated by Proteobacteria, Verrucomicrobia, and Bacteroidetes, while the LNK-/- group was dominated by Bacteroidetes, Proteobacteria, and Firmicutes (Figure 2C). Although the dominant phyla were similar between the two groups, Figure 2C shows that their proportions differed. The WT group was dominated by Proteobacteria and had a relatively high abundance of Verrucomicrobia, significantly different from LNK-/- mice ($p < 0.05$) (Figure 2D), while the LNK-/- group had relatively large proportions of Firmicutes ($p < 0.05$) and Bacteroidetes (Figure 2D).
According to the results of the linear discriminant analysis effect size (LEfSe) analysis (LDA ≥ 2.0), the abundances of Proteobacteria, Helicobacteraceae, Epsilonproteobacteria, and Campylobacterales were significantly increased in WT mice, while the abundances of Erysipelotrichales, Allobaculum, and Bacteroidales were significantly increased in LNK-/- mice (Figure 2E).
To further explore the gut microbial differences between LNK-/- and WT mice, we used STAMP software to identify genera with significant differences ($p < 0.05$). We found that the abundances of some short-chain fatty acid (SCFA)-producing genera, such as Prevotella_9, Prevotellaceae_UCG-001, Clostridium_sensu_stricto_1, Ruminococcaceae_UCG-010, and Stenotrophomonas, were significantly lower in the WT groups than in the LNK-/- groups (Figure 2F).
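STAMP's two-group feature comparison is essentially a per-genus test with multiple-testing correction; a hedged Python equivalent using Welch's t-test and Benjamini–Hochberg adjustment on simulated relative abundances is sketched below.

```python
# Per-genus Welch's t-test with Benjamini-Hochberg correction, mimicking
# a STAMP-style two-group comparison. Abundances are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wt = rng.dirichlet(np.ones(5), size=6)   # 6 WT samples x 5 genera (relative abundances)
ko = rng.dirichlet(np.ones(5), size=7)   # 7 LNK-/- samples x 5 genera

pvals = np.array([stats.ttest_ind(wt[:, g], ko[:, g], equal_var=False).pvalue
                  for g in range(wt.shape[1])])

# Benjamini-Hochberg adjustment
m = len(pvals)
order = np.argsort(pvals)
ranked = pvals[order] * m / np.arange(1, m + 1)
adj = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1.0)
adjusted = np.empty(m)
adjusted[order] = adj
print("BH-adjusted p-values per genus:", adjusted.round(3))
```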
## 4. Discussion
Our previous study showed that, upon consumption of an HFD, LNK-/- mice had a loss of appetite, and WT mice accumulated more intrahepatic triglyceride, TG, and FFA than LNK-/- mice. LNK plays a pivotal role in adipose glucose transport by regulating insulin-mediated IRS1/PI3K/Akt/AS160 signaling. In this study, we found that the abundance of Proteobacteria, a phylum containing major LPS-producing bacteria, was significantly increased in the WT mice group, while some SCFA-producing genera were significantly lower in WT groups than in LNK-/- groups.
LPS is also called endotoxin. The complex of LPS and its receptor CD14 can be recognized by Toll-like receptor 4 (TLR4) on the surface of immune cells to induce an inflammatory response. When a change in diet or the use of antibiotics disturbs the balance of the gut microbiota, the number of harmful bacteria such as Gram-negative (G−) bacteria increases, and their decomposition product LPS passes into the blood circulation through the intestinal epithelium to cause endotoxemia, which triggers a systemic inflammatory response [18]; elevated inflammation and LPS levels have likewise been reported in patients with type 2 diabetes. Both animal and human experiments have demonstrated that direct injection of LPS can increase fasting blood glucose and insulin levels, resulting in hyperinsulinemia and insulin resistance. When the number of G− bacteria is decreased by the use of antibiotics, the amount of LPS entering the circulation decreases, which can relieve systemic inflammation and increase insulin sensitivity. LPS receptor CD14 knockout mice fed a high-fat diet or injected with LPS had decreased inflammatory factors in adipose tissue, increased insulin sensitivity in liver and adipose tissue, delayed development of insulin resistance, and slower weight gain [19,20,21,22]. These results suggest that LPS plays an important role in the induction of the inflammatory response and insulin resistance.
The intestinal microbiota may affect the level of circulating LPS, and thereby induce insulin resistance, in the following two ways. First, when the structure of the intestinal microbiota is unbalanced, the number of Gram-positive (G+) bacteria decreases, the proportion of G− bacteria increases, and the production of LPS rises. Studies have shown that in diabetic patients the number of G+ bacteria such as *Clostridium* decreased and the number of LPS-containing bacteria such as Bacteroides and Proteobacteria increased. Adding Lactobacillus and Bifidobacterium to the diet of high-fat-induced obese mice could help restore the balance between probiotics and pernicious bacteria in the gut and increase insulin sensitivity. The addition of prebiotic oligosaccharides to a high-fat-diet-induced diabetic mouse model also increased the number of bifidobacteria, decreased the level of LPS, and improved insulin secretion and inflammation, effects that were significantly associated with the number of bifidobacteria [23]. Second, the intestinal microbiota alters intestinal permeability. Studies have shown that a high-fat diet may interact with the intestinal microbiota, alter intestinal permeability, promote a rise in LPS levels, and cause an inflammatory state and insulin resistance [24,25]. The intestinal microbiota selectively regulates the expression of colonic cannabinoid receptor 1, which affects intestinal permeability by altering the distribution of claudin-1 [26]. In addition, obesity itself affects intestinal permeability. A study of normal-weight and overweight healthy women showed a positive correlation between gut permeability and waist circumference and visceral fat content [27]. Increased visceral adipose tissue promotes the secretion of the pro-inflammatory factors TNF-α, IL-1, and IL-6 by infiltrating macrophages in adipose tissue and reduces the production of the anti-inflammatory factor adiponectin. Under the action of multiple pro-inflammatory factors, intestinal mucus production decreases and intestinal permeability increases. TNF-α can also act on tight junction proteins, increasing the permeability of the tight junctions between intestinal cells [28,29,30]. These proinflammatory factors can also promote insulin resistance and lipid storage in adipocytes, thereby forming a vicious cycle.
Probiotics such as Bifidobacterium, Lactobacillus, and Prevotella_9 can promote the release of SCFAs from undigested soluble dietary fiber in the colon via fermentation, at the same time reducing intestinal pH and inhibiting the growth of harmful bacteria, thereby reducing the production of LPS in the intestinal lumen [31]. SCFAs can also promote insulin secretion by pancreatic β cells by regulating the secretion of gut-derived hormones, such as glucagon-like peptide-1 (GLP-1), glucagon-like peptide-2 (GLP-2), peptide YY (PYY), and glucose-dependent insulinotropic peptide (GIP), thereby increasing insulin sensitivity and suppressing appetite and food intake, which improves insulin resistance. After 8 weeks of oral administration of VSL#3, a probiotic mixture containing eight kinds of viable bacteria, diet-induced obese mice had increased GLP-1 production, decreased food intake, reduced body weight, and improved glucose tolerance. Their intestinal microbiota composition also changed: the numbers of probiotics such as *Lactobacillus* (Firmicutes) and *Bifidobacterium* increased, which was related to the increase in butyrate among the SCFAs [32]. Butyrate can improve intestinal function and promote intestinal motility, and has a good therapeutic effect in patients with loss of appetite, diarrhea, dyspepsia, and related conditions. In addition, butyrate can promote reduced dietary intake and digestion and is also beneficial for obese or fatty liver patients [32]. In another study, healthy volunteers consumed either inulin-containing foods that promote probiotic growth or a regular diet; GLP-2 was found to be increased in fasting serum, and intestinal permeability decreased, after eating the inulin-containing foods [33]. These results demonstrated that probiotics could promote the production of SCFAs and the secretion of GLP-1 and GLP-2 by regulating the balance of the intestinal microbiota, further improving intestinal permeability and alleviating IR.
Our research explored the changes in the gut microbiota of LNK-/- and WT mice, providing new ideas for the mechanism and treatment of MetS and IR. Although previous studies had shown that disorders of the intestinal microbiota are related to MetS, the underlying mechanism remains unclear; this study therefore supplements that research field. Nevertheless, this study had some shortcomings. Firstly, sex hormones strongly influence body fat distribution and adipocyte differentiation. Estrogen and testosterone differentially affect adipocyte physiology, and estrogens play a leading role in the causes and consequences of female obesity. Therefore, to avoid the influence of estrogen on the development of obesity, we did not compare male and female mice together and only collected fecal samples from previously established obesity-induced IR male mouse models. The sample size was not large, and the results may be biased with respect to female mice; the findings in female mice and the potential effects of sex hormones on the gut microbiota need further research. Secondly, this study focused on the difference in gut microbiota between LNK-/- and WT mice. We will continue the relevant studies, and indexes such as LPS, butyrate, gut permeability, and mucosal structural changes will be measured or observed in our next study. The relationship between changes in the gut microbiome and IR needs to be confirmed by further experiments. Finally, 16S rRNA gene sequencing has some limitations, such as short read lengths, sequencing errors, and limited taxonomic resolution. It will be important to combine signaling pathway and metabolomics analyses in the next step.
## 5. Conclusions
Our research showed that the structure and composition of the gut microbiota community differed significantly between LNK-/- and WT mice. The change in the gut microbial structure and composition of obese WT mice might aggravate glucolipid metabolic disorder and IR by increasing the production of LPS while reducing the production of SCFAs.
# PPAR Pan Agonist MHY2013 Alleviates Renal Fibrosis in a Mouse Model by Reducing Fibroblast Activation and Epithelial Inflammation
## Abstract
The peroxisome proliferator-activated receptor (PPAR) nuclear receptor family has been an interesting target for the treatment of chronic diseases. Although the efficacy of PPAR pan agonists in several metabolic diseases has been well studied, the effect of PPAR pan agonists on kidney fibrosis development has not been demonstrated. To evaluate the effect of the PPAR pan agonist MHY2013, a folic acid (FA)-induced in vivo kidney fibrosis model was used. MHY2013 treatment significantly attenuated the decline in kidney function, tubule dilation, and FA-induced kidney damage. The extent of fibrosis, determined using biochemical and histological methods, showed that MHY2013 effectively blocked the development of fibrosis. Pro-inflammatory responses, including cytokine and chemokine expression, inflammatory cell infiltration, and NF-κB activation, were all reduced with MHY2013 treatment. To demonstrate the anti-fibrotic and anti-inflammatory mechanisms of MHY2013, in vitro studies were conducted using NRK49F kidney fibroblasts and NRK52E kidney epithelial cells. In NRK49F kidney fibroblasts, MHY2013 treatment significantly reduced TGF-β-induced fibroblast activation. The gene and protein expression of collagen I and α-smooth muscle actin was significantly reduced with MHY2013 treatment. Using PPAR transfection, we found that PPARγ played a major role in blocking fibroblast activation. In addition, MHY2013 significantly reduced LPS-induced NF-κB activation and chemokine expression, mainly through PPARβ activation. Taken together, our results suggest that administration of the PPAR pan agonist effectively prevented renal fibrosis in both in vitro and in vivo models of kidney fibrosis, implicating the therapeutic potential of PPAR agonists against chronic kidney diseases.
## 1. Introduction
Chronic kidney disease (CKD) affects approximately $10\%$ of the global population, with high mortality due to limited treatment options [1]. CKD often leads to end-stage renal disease, which is fatal without renal replacement therapy, such as dialysis or kidney transplantation. Kidney fibrosis is considered a major underlying pathological process that is commonly detected in CKD development [2]. Understanding the mechanisms of renal fibrosis is essential for developing therapies to prevent or slow CKD progression. Fibrosis is defined by the formation and accumulation of the extracellular matrix (ECM), mainly by tissue-resident fibroblast cells [3]. Under physiological conditions, minimal amounts of ECM support kidney structure and function. In response to tissue injury, wound-healing processes are activated to inhibit the inflammatory response with proper tissue regeneration. However, persistent inflammatory responses result in incomplete regeneration, with the formation of fibrotic scar tissue [4]. Exaggerated deposition of ECM during chronic and pathological fibrosis development disrupts the normal kidney architecture and interferes with kidney function. At a certain stage, unresolved kidney fibrosis becomes irreversible and contributes to renal failure.
The mechanisms underlying the development of kidney fibrosis have been studied extensively [5]. Regardless of the trigger, multiple cell types participate in fibrogenesis, including fibroblasts, pericytes, epithelial cells, endothelial cells, and inflammatory cells [6]. The main contributor to fibrosis progression is the accumulation of fibroblasts with the phenotypic appearance of myofibroblasts. During progressive fibrosis, the interstitium fills with myofibroblasts, which produce large amounts of ECM proteins [6]. Although myofibroblasts are the executing cells of fibrosis, other cells also contribute to its development through both direct and indirect mechanisms. Pericytes, epithelial cells, and endothelial cells have been shown to directly contribute to fibrosis through transition to mesenchymal-like cell types [7]. Epithelial cells also contribute to fibrosis through the secretion of pro-fibrogenic and pro-inflammatory factors, such as TGF-β, CTGF, and cytokines [8,9]. Considerable evidence suggests that inflammatory cells play a critical role in the initiation and progression of renal fibrosis [10,11]. Chemokines are mainly secreted from tubule epithelial cells during injury and recruit various inflammatory cell types, including monocytes, T cells, dendritic cells, and fibrocytes [9]. The infiltration of inflammatory cells is a major phenotype of kidney fibrosis that promotes its progression [12].
Peroxisome proliferator-activated receptors (PPARs), PPARα, PPARβ/δ, and PPARγ, play an essential role in the regulation of various physiological processes, including lipid and energy metabolism [13]. Fibrates (PPARα agonists) are used to treat dyslipidemia, and thiazolidinediones (PPARγ agonists) are used to increase insulin sensitivity in type 2 diabetics. In addition, PPAR dual agonists have been developed to treat type 2 diabetes with secondary cardiovascular complications [14,15]. Many synthetic ligands for PPARs are still under development to expand their therapeutic applications. In addition to their original roles in metabolism, PPAR agonists have been shown to exert various physiological effects. PPAR agonists have been reported to block the development of fibrosis in the liver, heart, kidneys, and lungs [16,17]. Furthermore, several studies have reported the anti-inflammatory action of peroxisome proliferator-activated receptor (PPAR) agonists [18].
Previously, we synthesized and evaluated the role of MHY2013, a potent PPAR pan-agonist, in several metabolic disease models [19,20]. In addition, MHY2013 showed anti-fibrotic effects in an age-related renal fibrosis model by regulating the lipid metabolism in epithelial cells [21]. However, the effects of MHY2013 on general aspects of renal fibrosis have not yet been investigated. In this study, we demonstrated the role and efficacy of MHY2013 in a general renal fibrosis model. Using mouse models of renal fibrosis induced by folic acid, we demonstrated the anti-fibrotic efficacy of PPAR pan agonism in renal fibrosis. MHY2013 treatment significantly reduced fibrosis and inflammation in a mouse model of renal fibrosis. In addition, using in vitro analysis, we found anti-fibrotic and anti-inflammatory effects of MHY2013 in renal fibroblasts and epithelial cells.
## 2.1. MHY2013 Reduces Folic-Acid-Induced Renal Damage and Tubule Dilation in Mice
To evaluate the anti-fibrotic effects of MHY2013, a folic-acid-induced renal fibrosis model was used. MHY2013 was intraperitoneally administered at a low (0.5 mg/kg/day) or high dose (3 mg/kg/day) during the experimental period (Figure 1A). The MHY2013-treated groups showed lower expression of kidney damage-related genes (Havcr1, Timp2, Igfbp7, and Spp1) than the FA-treated group (Figure 1B). Blood urea nitrogen (BUN) levels were increased in the folic acid (FA)-treated group, and high-dose MHY2013 treatment significantly blocked the FA-induced BUN increase (Figure 1C). Structural changes were analyzed with hematoxylin and eosin (H&E) staining. Tubule dilation and damage were detected in the cortex and medulla regions of FA-treated kidneys (Figure 1D). MHY2013-treated kidneys showed a smaller increase in tubule dilation (Figure 1D). These results indicate that MHY2013 has protective effects against folic-acid-induced kidney damage.
## 2.2. MHY2013 Suppresses FA-Induced Renal Fibrosis Development in Mice
We further analyzed the effects of MHY2013 on the development of renal fibrosis. The MHY2013-treated groups showed lower expression of fibrosis-related genes (Col1a2, Col3a1, Vim) than the FA-treated group (Figure 2A). The increased expression of Col1a2 and Vim was confirmed with in situ hybridization (ISH) analysis. FA treatment significantly increased Col1a2 and Vim expression in the interstitial region of the kidney, and the MHY2013-treated groups showed lower Col1a2 and Vim expression (Figure 2B,C). The protein levels of fibrosis markers were further examined. FA-induced α-SMA and collagen I levels were significantly decreased with MHY2013 treatment (Figure 3A). An immunohistochemical analysis confirmed that fewer αSMA-positive myofibroblasts were detected in the MHY2013-treated kidneys (Figure 3B). The extent of fibrosis was confirmed using Sirius Red (SR) staining. FA treatment significantly increased SR-positive regions, whereas MHY2013 treatment reduced them (Figure 3C,D). Finally, the activation of SMAD proteins was examined. Less SMAD2 and SMAD3 phosphorylation was detected in the MHY2013-treated groups than in the FA group (Figure 3E). Collectively, these data indicate that MHY2013 effectively blocked FA-induced kidney fibrosis.
## 2.3. FA-Induced Inflammatory Responses Are Down-Regulated by MHY2013
The development of fibrosis is accompanied by pro-inflammatory responses. FA treatment also increases the inflammatory responses in the kidneys [22]. We further examined the inflammatory responses in animal models. MHY2013 treatment significantly reduced pro-inflammatory gene (Tnfa, Il1b, and Ccl2) expression and the macrophage marker Emr1 in the kidneys (Figure 4A). The activation of NF-κB, induced by FA, was effectively blocked with MHY2013 treatment (Figure 4B). Activated NF-κB was mainly detected in the epithelial cells of dilated tubules, and MHY2013 significantly reduced p-NF-κB expression in tubule cells (Figure 4C). Macrophage infiltration was confirmed using ISH analysis. Increased Emr1 expression was mainly detected in the interstitial region of FA-treated kidneys (Figure 4D). MHY2013-treated groups showed less macrophage infiltration in the kidneys (Figure 4D). We further detected the co-expression of Col1a2 and Emr1. In the FA group, Emr1- and Col1a2-positive cells were colocalized in the kidneys, indicating that inflammation is connected to fibrosis development (Figure 4D). In accordance with the qPCR results, the MHY2013-treated group showed lower Emr1 and Col1a2 expression in the kidney (Figure 4D). These results indicate that MHY2013 exerts anti-inflammatory effects against FA-induced kidney fibrosis.
## 2.4. MHY2013 Blocks TGF-β-Induced NRK49F Kidney Fibroblast Activation
To investigate the anti-fibrotic role of MHY2013 under in vitro conditions, we used kidney-derived fibroblast cells. First, we confirmed the activation of PPAR by MHY2013 in NRK49F kidney fibroblasts. MHY2013 significantly increased PPRE activity under PPARα, PPARβ, and PPARγ expression conditions, confirming MHY2013 as a PPAR pan agonist (Figure 5A–C). TGF-β treatment significantly increased Col1a2, Acta2, and Vim expression in NRK49F fibroblasts, and MHY2013 pre-treatment effectively blocked fibroblast activation (Figure 5D). The protein expression levels of α-SMA and Col1 were analyzed. MHY2013 treatment significantly reduced TGF-β-induced α-SMA and Col1 protein expression (Figure 5E). The increased expression of α-SMA was confirmed using immunofluorescence. TGF-β increased αSMA expression in cells, whereas MHY2013 reduced αSMA expression (Figure 5F). To examine which PPAR subtype influenced fibroblast activation, we overexpressed PPAR before TGF-β treatment. We found that PPARγ overexpression effectively blocked TGF-β-induced fibroblast activation (Figure 5G), whereas other PPAR subtypes did not show a significant reduction (data not shown). These results indicate that MHY2013 effectively blocks TGF-β-induced NRK49F kidney fibroblast activation, mainly through PPARγ activation.
## 2.5. MHY2013 Reduces LPS-Induced Chemokine Expression in NRK52E Kidney Epithelial Cells
To examine the anti-inflammatory effects of MHY2013 under in vitro conditions, kidney tubule epithelial cells were used. Stimulation of NRK52E cells with a lipopolysaccharide (LPS) significantly increased chemokine gene expression, and MHY2013 pretreatment effectively reduced their expression (Figure 6A). We further evaluated NF-κB activity using a luciferase assay. LPS treatment significantly increased NF-κB activity, whereas MHY2013 effectively blocked NF-κB activity (Figure 6B). Finally, to examine which PPAR subtype influences LPS-induced chemokine expression, we overexpressed PPAR before LPS treatment. We found that PPARβ overexpression effectively blocked LPS-induced chemokine expression (Figure 6C), whereas other PPAR subtypes did not show a significant reduction (data not shown). Collectively, these data show that MHY2013 reduces LPS-induced NF-κB activation and chemokine expression in renal epithelial cells, mainly through PPARβ activation.
## 3. Discussion
Renal fibrosis, which generally accompanies CKD progression, is defined by the loss of renal parenchymal cells and their substitution with ECM proteins. During fibrosis development, both the synthesis and degradation of ECM proteins occur via several intra- and extracellular events. When ECM protein synthesis exceeds degradation, excessive ECM accumulation results in fibrosis [23]. It is well established that various cell types directly and indirectly participate in fibrosis development. Resident fibroblasts are the cells mainly responsible for the synthesis of ECM proteins [3]. During fibrogenesis, fibroblasts receive signals from other cells, begin to proliferate, and become myofibroblasts. Myofibroblasts produce large amounts of ECM proteins that primarily contribute to the pathogenesis of kidney fibrosis.
Transforming growth factor-β (TGF-β) is considered a key driver of renal fibrosis, stimulating fibroblasts in the kidney, which makes it an interesting target for the treatment of fibrosis [24]. Indeed, anti-TGF-β treatments using neutralizing antibodies, inhibitors of the TGF-β receptor, or antisense oligonucleotides against TGF-β1 halt the progression of renal fibrosis, supporting its fibrotic role in CKD [25]. We found that MHY2013 significantly reduced TGF-β-induced fibroblast activation in vitro. MHY2013 effectively inhibited TGF-β-induced α-SMA and collagen I expression in fibroblasts. Several studies have reported that PPARγ activation blocks TGF-β-induced ECM production in fibroblasts. Wang et al. evaluated three PPARγ agonists (15d-PGJ2, troglitazone, and ciglitazone) and found that PPARγ activation directly inhibits TGF-β/SMAD signaling pathways and alleviates renal fibroblast activation, resulting in reduced ECM synthesis [26]. Another PPARγ agonist, pioglitazone, similarly prevents renal fibrosis by repressing the TGF-β signaling pathway [27]. MHY2013 also showed direct anti-fibrotic effects on fibroblasts. Using PPAR transfection, we found that PPARγ overexpression inhibits TGF-β-induced fibroblast activation. Based on these results, we concluded that MHY2013 directly reduces fibroblast activation through PPARγ activation.
Renal inflammation is a protective response induced during kidney injury, which eliminates the cause of injury and promotes tissue repair. However, unresolved inflammatory responses can promote abnormal fibrosis in the kidneys, leading to CKD [28]. During prolonged inflammation, bone-marrow-derived leukocytes, including neutrophils and macrophages, are the main players in kidney inflammation. The accumulation of these cells is a major feature of pro-inflammatory kidney disease. In addition to these cells, studies have also revealed the important role of locally activated kidney cells, such as tubular epithelial cells (TECs), mesangial cells, podocytes, and endothelial cells. During the development of interstitial fibrosis, TECs play an important role in initiating the inflammatory response [29]. Under damaged conditions, TECs actively participate in pro-inflammatory responses through chemokine production. Several lines of evidence suggest that chemokines produced from TECs are crucial for the recruitment of monocytes and macrophages [30]. Based on these observations, the regulation of epithelial inflammation has been an interesting target for modulating kidney inflammation and fibrosis.
Based on our finding that MHY2013 decreases inflammation in animal models, we further demonstrated its role in epithelial inflammation. MHY2013 significantly reduces NF-κB activation and chemokine production in epithelial cells. Furthermore, using PPAR subtype transfection, we found that PPARβ overexpression decreases chemokine production in epithelial cells. There is evidence that PPARβ exerts anti-inflammatory effects in kidney disease. PPARβ-null mice developed more severe ischemic renal injury with more severe tubule damage than wild-type mice [31]. A macrophage-specific PPARβ-deleted mouse model also showed impaired apoptotic cell clearance and reduced anti-inflammatory cytokine production [32]. These mice were much more likely to develop autoimmune kidney disease, a lupus-like autoimmune disease. In addition, several reports have demonstrated the anti-inflammatory role of PPARβ agonists in kidney disease. GW0742 has been shown to inhibit streptozotocin-induced diabetic nephropathy in mice by reducing inflammatory mediators, including MCP-1 and osteopontin [33]. Another study showed that PPARβ agonists reduced the incidence of hypertension, endothelial dysfunction, inflammation, and organ damage in lupus mice [34]. Collectively, the reduced inflammatory responses observed in our in vitro and in vivo experiments were associated with the PPARβ activation property of MHY2013.
## 4.1. Animal Studies
All animal experiments were approved by the Institutional Animal Care Committee of Pusan National University (PNU-IACUC approval No. PNU-2022-3164) and performed according to the guidelines issued by Pusan National University. C57BL/6J mice were obtained from Hyochang Science (Daegu, Republic of Korea). To establish the renal fibrosis mouse model, male mice (7 weeks old) were intraperitoneally injected with a single dose of folic acid (250 mg/kg dissolved in 0.3 M NaHCO3) or vehicle. For the MHY treatment groups, MHY2013 was intraperitoneally administered at a low (0.5 mg/kg/day) or high dose (3 mg/kg/day) during the experimental period ($$n = 5$$~7). All mice were maintained at 23 ± 2 °C with a relative humidity of 60 ± $5\%$ and 12 h light/dark cycles. One week after the folic acid treatment, the mice were sacrificed by CO2 inhalation. Serum was collected for biochemical analyses. Kidneys were collected and immediately frozen in liquid nitrogen. For long-term storage, kidney samples were moved to a −80 °C deep freezer. Portions of the kidneys were fixed in neutral-buffered formalin for histochemical experiments.
## 4.2. Cell Culture Experiments
NRK49F rat kidney fibroblasts were purchased from ATCC (CRL-1570) and grown in Dulbecco’s modified Eagle’s medium (DMEM) supplemented with $10\%$ fetal bovine serum (FBS) and $1\%$ penicillin. All cells were incubated at 37 °C and $5\%$ CO2 in a water-saturated atmosphere. To determine the effect of MHY2013 on TGF-β-induced fibroblast activation and ECM production, cells were pre-treated with 10 μM MHY2013 30 min before TGF-β (10 ng/mL) treatment. Protein and RNA samples were collected 24 h after the TGF-β treatment to determine the effect of MHY2013. NRK52E rat kidney epithelial cells were purchased from ATCC (CRL-1571) and grown in DMEM supplemented with $10\%$ FBS and $1\%$ penicillin. To determine the effect of MHY2013 on LPS-induced inflammation, cells were pre-treated with 10 μM MHY2013 30 min before LPS (10 μg/mL) treatment. Protein and RNA samples were collected 1 h after LPS treatment to determine the effect of MHY2013. All cell culture experiments were performed at least 3 times per experiment.
## 4.3. Serum Biochemical Measurements
Serum samples were obtained using centrifugation at 3000 rpm for 20 min at 4 °C. Blood urea nitrogen (BUN) levels were measured using a commercial assay kit from Shinyang Diagnostics (SICDIA L-BUN, 1120171, Seoul, Republic of Korea) according to the manufacturer’s instructions.
## 4.4. Protein Extraction and Western Blot Analysis
Two different solutions were used to extract proteins: ProEXTM CETi protein extract solution (Translab, Daejeon, Republic of Korea) was used to extract protein from tissues, and RIPA buffer (#9806, Cell Signaling Technology, Danvers, MA, USA) was used to obtain total protein from cells. Both solutions contained protease inhibitor cocktails to prevent protein degradation and phosphatase inhibitors to prevent dephosphorylation. Protein concentration was measured using a BCA reagent (Thermo Scientific, Waltham, MA, USA). Extracted proteins (5–20 μg) were mixed with 4× sample buffer (Cat#1610747, Bio-Rad, CA, USA) and boiled for 5 min. The proteins were then separated using sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to polyvinylidene difluoride membranes (Millipore, Burlington, MA, USA). The membranes were blocked in $5\%$ nonfat milk and washed with Tris-buffered saline-Tween (TBS-Tween) buffer for 30 min. Specific primary antibodies (1:500 to 1:2000 dilution, Supplementary Table S1) were added to the membranes and incubated overnight at 4 °C. After three washes with the TBS-Tween buffer, the membranes were incubated with a horseradish peroxidase-conjugated anti-mouse, anti-rabbit, or anti-goat antibody (diluted 1:10,000) for 1 h at 25 °C. The resulting immunoblots were visualized using Western Bright Peroxide solution (Advansta, San Jose, CA, USA) and a ChemiDoc imaging system (Bio-Rad) according to the manufacturer’s instructions. All western blot analyses were performed at least 3 times per experiment.
## 4.5. RNA Extraction and qRT-PCR
Total RNA was prepared using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). Briefly, kidney tissues ($$n = 5$$~7) or cells ($$n = 3$$) were homogenized in the TRIzol reagent. To isolate RNA, 0.2 mL chloroform was added to the 1 mL homogenate and shaken vigorously for 15 min. The aqueous phases were transferred to fresh tubes, and an equal volume of isopropanol was added. The samples were then incubated at 4 °C for 15 min and centrifuged at 12,000× g for 15 min at 4 °C. The supernatants were removed, and the resulting RNA pellets were washed once with $75\%$ ethanol, dried, and dissolved in diethyl pyrocarbonate-treated water. Next, 1.0 μg of isolated RNA was reverse-transcribed using a cDNA synthesis kit from GenDEPOT (Katy, TX, USA). qPCR was performed using a SYBR Green Master Mix (BIOLINE, Taunton, MA, USA) and a CFX Connect System (Bio-Rad). Primers were designed using Primer3Plus [35], and the primer sequences used are listed in Supplementary Table S2. For qPCR data analysis, the $2^{-\Delta\Delta C_T}$ method was used as the relative quantification strategy.
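As a worked example of this relative-quantification step, the Livak $2^{-\Delta\Delta C_T}$ calculation proceeds as follows; the Ct values and the choice of Gapdh as reference gene are illustrative assumptions, not values from the study.

```python
# Worked 2^(-ddCt) example. Ct values and the Gapdh reference gene are
# illustrative assumptions, not study data.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target vs. a reference gene,
    normalised to the control group (Livak method)."""
    d_ct_treated = ct_target - ct_ref            # dCt, treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # dCt, control sample
    dd_ct = d_ct_treated - d_ct_control          # ddCt
    return 2.0 ** (-dd_ct)

# e.g. a fibrosis marker in an FA-treated kidney vs. vehicle control:
print(fold_change(ct_target=24.1, ct_ref=18.0,
                  ct_target_ctrl=27.3, ct_ref_ctrl=18.1))  # ~8.6-fold induction
```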
## 4.6. Histological Analysis
To visualize histological changes in the kidneys, the kidneys were fixed in $10\%$ neutral formalin, and paraffin-embedded sections were stained with H&E. To assess the degree of renal fibrosis and damage, SR staining was performed using a commercially available kit (VB-3017; Rockville, MD, USA). This staining method is commonly used to visualize collagen fibers, which are a hallmark of fibrosis. Immunohistochemical analysis was performed to visualize the protein expression regions in the kidneys. Briefly, paraffin-embedded sections were deparaffinized and rehydrated. The sections were then incubated with the primary antibodies and visualized using diaminobenzidine substrates. The sections were counterstained with hematoxylin, which allows for the visualization of cell nuclei. Images were obtained using a microscope (LS30; Leam Solution, Seoul, Republic of Korea).
## 4.7. In Situ Hybridization
ISH was performed using formalin-fixed paraffin-embedded tissue samples. The RNAscope 2.5 HD Assay (322300, Bio-Techne, Minneapolis, MN, USA) or RNAscope 2.5 HD Duplex Detection Kit (322436, Bio-Techne, Minneapolis, MN, USA) was used to visualize RNA expression in the tissue, in accordance with the manufacturer’s instructions. The following probes were used to perform the RNAscope assay: Mm-Vim cat# 457961, Mm-Emr1 cat# 317969-C2, and Mm-Col1a1 cat# 319379. Images were obtained using a microscope (LS30; LEAM Solution, Seoul, Republic of Korea).
## 4.8. Measurement of Transcriptional Activity
Luciferase assays were performed to determine the transcriptional activity of PPAR transcription factors in the NRK49F cells. Briefly, NRK49F cells were transfected with the PPRE-X3-TK-LUC plasmid (0.1 µg) with PPARα, PPARβ/δ, or PPARγ expression vectors (0.01 µg) using Lipofectamine 3000 reagent (Invitrogen, Carlsbad, CA, USA.). The cells were further treated with MHY2013 or WY14643 (a known PPARα agonist), GW501516 (a known PPARβ/δ agonist), and rosiglitazone (a known PPARγ agonist). The luciferase activity was measured using a One-Glo Luciferase Assay System (Promega, Madison, WI, USA). After adding the luciferase substrate, the luminescence was measured using a luminescence plate reader (Berthold Technologies GmbH & Co., Bad Wildbad, Germany). Luciferase assays were performed to determine the transcriptional activity of NF-κB in the NRK52E cells. The cells were transfected with the NF-κB promoter-LUC plasmid, and the luciferase activity was measured using a One-Glo Luciferase Assay System and a luminescence plate reader.
## 4.9. Immunofluorescence
Immunofluorescence was performed to visualize protein expression in the cells. The cells were fixed in $4\%$ formaldehyde for 10 min, washed thrice with ice-cold PBS, and exposed to $0.25\%$ Triton X-100 in PBS for 10 min for permeabilization. To prevent non-specific antibody binding, the cells were blocked using a solution containing $1\%$ BSA and $0.1\%$ Tween 20 in PBS at room temperature for 30 min. Next, the cells were incubated overnight at 4 °C with anti-αSMA antibody diluted in the blocking buffer. After washing off unbound antibodies with PBS, the cells were incubated with a fluorescently tagged secondary antibody for 1 h in the dark. The cells were counterstained with Hoechst 33258 in PBS for 1 min to visualize the nuclei. The images were captured using a fluorescence microscope (LS30).
## 4.10. Quantification and Statistical Analysis
Student’s t-test was used to analyze differences between two groups, and an analysis of variance (ANOVA) was used to analyze differences among multiple groups. The level of statistical significance was set at $p < 0.05$. The software used for the analyses was GraphPad Prism version 5 (GraphPad Software Inc., San Diego, CA, USA). Image quantification was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
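A minimal sketch of these two tests with scipy is shown below; the measurement vectors are placeholders standing in for, e.g., a fibrosis readout across the control, FA, and FA + MHY2013 groups.

```python
# Two-group t-test and one-way ANOVA, as described above. Values are
# illustrative placeholders, not study measurements.
from scipy import stats

ctrl   = [12.0, 13.5, 11.8, 12.9, 13.1]
fa     = [21.4, 19.8, 22.5, 20.9, 23.0]
fa_mhy = [15.2, 16.0, 14.8, 15.9, 16.4]

t_stat, p_two = stats.ttest_ind(ctrl, fa)            # two-group comparison
f_stat, p_anova = stats.f_oneway(ctrl, fa, fa_mhy)   # multi-group comparison
print(f"t-test p = {p_two:.4g}; ANOVA p = {p_anova:.4g}")
```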
## 5. Conclusions
In conclusion, we investigated the anti-fibrotic and anti-inflammatory roles of the PPAR pan agonist MHY2013 using in vitro and in vivo kidney fibrosis models. When administered to the FA-induced mouse kidney fibrosis model, MHY2013 effectively reduced fibrosis development and inflammatory responses in the kidney. The anti-fibrotic and anti-inflammatory mechanisms of MHY2013 were further demonstrated using NRK49F kidney fibroblasts and NRK52E kidney epithelial cells. MHY2013 directly reduced TGF-β-induced ECM production in fibroblasts mainly through PPARγ activation, whereas MHY2013 suppressed LPS-induced pro-inflammatory responses in TECs mainly through PPARβ activation. Taken together, our results suggest that the administration of the PPAR pan agonist effectively prevented renal fibrosis in both in vitro and in vivo models of kidney fibrosis, implicating the therapeutic potential of PPAR agonists against chronic kidney diseases (Figure 7).
# Untargeted Lipidomics of Erythrocytes under Simulated Microgravity Conditions
## Abstract
Lipidomics and metabolomics are nowadays widely used to provide promising insights into the pathophysiology of cellular stress disorders. Using a hyphenated ion mobility mass spectrometric platform, our study expands the understanding of the cellular processes and stress caused by microgravity. By lipid profiling of human erythrocytes, we annotated complex lipids such as oxidized phosphocholines, phosphocholines bearing arachidonic acid in their moiety, as well as sphingomyelins and hexosyl ceramides associated with microgravity conditions. Overall, our findings give an insight into the molecular alterations and identify erythrocyte lipidomic signatures associated with microgravity conditions. If the present results are confirmed in future studies, they may help to develop suitable treatments for astronauts after their return to Earth.
## 1. Introduction
The term microgravity generally refers to the small residual accelerations that remain in a free-falling system. When gravitation is the only force acting on an object, the object is in free fall and hence experiences microgravity [1]. Weightlessness is the state in which a body having a certain weight is balanced by another force or remains in free fall without feeling the effects of the atmosphere, equivalent to the situation faced by an astronaut aboard a spaceship. The effects of microgravity on human physiology have been studied extensively since the time of Yuri Gagarin (in 1961), who performed the first crewed orbital flight, revealing profound implications for human health [2]. Despite the great interest and commitment of the scientific community, the mechanisms by which microgravity exerts its effects on the human body are not entirely clear.
Acute changes in normal physiology are typically seen in astronauts as a response and adaptation to abnormal environments. Such peculiar alterations require the attention of doctors and scientists [3]. In addition to alterations at the genetic level [4], microgravity experienced by space travellers also induces profound alterations at the cellular level. These alterations occurring at the cellular level are reflected in a series of pathological conditions, such as reduced bone density, muscle atrophy, endocrine disorders, cognitive disorders, cardiovascular dysfunction, body fluid and electrolyte reduction, motion sickness, immune inhibition, and anaemia [5]. For these reasons, ground-based experiments simulating factors of spaceflight conditions are needed.
Microgravity is studied in several scientific and technological fields with the aim to highlight processes that on Earth are masked by the effects of the high gravitational field. Furthermore, the study of physiological processes in microgravity conditions allows the identification of the molecular mechanisms involved in different pathologies [6].
Roughly 350 people have experienced spaceflight in the past four decades, making it difficult to develop higher levels of clinical evidence to evaluate the effectiveness of space medicine interventions [7]. This limitation, together with the importance of studying the cellular alterations that affect astronauts, has led various research groups to study and build instruments capable of simulating space gravitational conditions on Earth.
The most commonly employed methods to simulate microgravity are random positioning machines (RPMs) and clinostats [8]. By controlled simultaneous rotation about two axes, the clinostat cancels the cumulative gravity vector at the centre of the device, producing an environment with an average of $10^{-3}$ g. This is accomplished by rotating a chamber at the centre of the device at a constant angular speed so as to disperse the gravity vector uniformly within a spherical volume [9].
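This averaging principle can be illustrated numerically: in the sample's rotating frame, the constant gravity vector sweeps a circle, so its time average approaches zero over whole rotations. The rotation speed below (60 rpm) is an arbitrary illustrative value, not a parameter from this study.

```python
# Toy illustration of clinorotation: the time-averaged gravity vector in
# the rotating sample frame tends to zero. 60 rpm is an arbitrary choice.
import numpy as np

omega = 2 * np.pi * 60 / 60            # 60 rpm expressed in rad/s
t = np.linspace(0, 600, 100_000)       # 10 minutes of rotation
g_frame = 9.81 * np.stack([np.cos(omega * t), np.sin(omega * t)])

residual = np.linalg.norm(g_frame.mean(axis=1))
print(f"time-averaged |g| = {residual:.2e} m/s^2 (vs. 9.81 m/s^2 static)")
```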
Bioactive lipid molecules known as signalling molecules, such as fatty acids, eicosanoids, diacylglycerol, phosphatidic acid, lysophosphatidic acid, ceramide, sphingosine, sphingosine-1-phosphate, phosphatidylinositol-3-phosphate, and cholesterol, are involved in the activation or regulation of different signalling pathways leading to apoptosis. Furthermore, alterations in lipid composition determine membrane rigidity and fluidity and play a crucial role in membrane organization, dynamics, and function [10]. Because of their biological role, lipids have been the subject of intense research since the 1960s, although progress was long held back by limited instrument platforms. Nowadays, lipidomics is considered an emerging science of fundamental importance for clarifying the biochemical pathways involved in several pathologies and cellular stress adaptations [11]. Advances in mass spectrometry (MS) and data processing, as well as the incorporation of soft ionization techniques such as electrospray ionization coupled with tandem MS (ESI-MS/MS), have revolutionized the use of mass spectrometry, ushering this analytical tool into the field of lipidomics [12]. Lipidomics studies can take untargeted or targeted approaches, each with its own advantages and limitations [13]. Untargeted lipidomics focuses on the analysis of all detectable metabolites in a sample, including unknown chemicals, while targeted lipidomics is the measurement of defined groups of metabolites. While the strength of the targeted approach is validating one or more hypotheses, untargeted lipidomics allows for the discovery of new compounds, which has led to a number of breakthroughs in understanding human disease risks [14]. Untargeted analyses can be performed with or without the addition of internal standards. When internal standards are added to samples, the method can provide pseudoconcentration results for particular metabolites or for metabolites with similar physicochemical properties (e.g., lipids). While these results are not truly quantitative, they may be accurate enough for case/control comparisons [15].
Despite the great relevance of the topic, few studies have so far investigated the behaviour of lipids in erythrocyte samples cultured under simulated microgravity conditions. In 2009, Ivanova et al. investigated blood samples from Russian cosmonauts, observing significant changes in the phospholipid classes [16]. An increase in the percentage of phosphatidylcholine may be clearly associated with an increase in membrane rigidity. On the other hand, changes in the physicochemical properties of the erythrocyte plasma membrane (microviscosity and permeability) can influence the efficiency of oxygen transfer, the state of the haemoglobin, and the conformation of hematoporphyrin. Furthermore, changes in erythrocyte structure can be assessed through ultrastructural morphological analysis by atomic force microscopy [17]. However, the study conducted by Ivanova’s team reported data deriving mainly from studies carried out after the end of a space flight, while only a few data relate to changes that occur during a space flight. Moreover, the lipid and phospholipid compositions of erythrocyte membranes were assayed by thin-layer chromatography followed by densitometric measurement of stained spots. This technique provides information on entire lipid classes but hardly allows the recognition of specific lipid compounds. Since no data are reported on this subject, we decided to exploit the potential offered by chromatographic and mass spectrometry innovations to better understand the lipid modifications undergone by erythrocytes during simulated microgravity conditions. For this reason, with the aim of better understanding which metabolic and/or structural changes occur in erythrocytes subjected to low gravity, an experimental analysis of the erythrocytes’ lipid profile and morphology under normal- and micro-g conditions was carried out, following a recent investigation on the subject [18]. In detail, human erythrocytes were cultured under simulated microgravity conditions and collected at different times of clinorotation. For each sample, the organic phase was collected and analysed using an ion mobility Q-TOF mass spectrometry platform (UHPLC-IM-QTOF-MS).
## 2. Results and Discussion
To investigate the erythrocytes’ lipid profile after clinorotation and to describe possible variations among the different lipid categories, samples were analysed by IM-QTOF-LC/MS and representative total ion chromatograms are shown in Figure 1.
Data processing yielded 215 and 160 features for the positive (PIA) and negative ionization analyses (NIA), respectively, which were subjected to multivariate statistical analysis (MVA). Chemical composition analysis indicated that the lipid fraction comprised lipids from the following classes: free fatty acids (FA), lysophosphatidylcholines (LysoPC), phosphatidylcholines (PC), phosphatidylethanolamines (PE), sphingomyelins (SM), ceramides (Cer), and ether-linked oxidized phosphatidylcholines (EtherOxPC). Initially, to study sample distribution, detect outliers, and highlight differences or common features, a PCA was performed. The unsupervised analysis of both PIA and NIA features did not indicate any sample clustering correlated with clinorotation, as shown in Figure 2.
However, the arrangement of the samples in the multivariate space appeared to be influenced by the time factor. Thus, to limit the influence of the time factor, we performed a PLS-DA for each time point. The validation parameters of the PIA and NIA models built for the samples collected at 6, 9, and 24 h are reported in the captions of the resulting plots (Figure 3).
To identify metabolites that discriminate between the two classes of samples (clinorotated vs. control), an OPLS-DA model of the IM-QTOF-LC/MS data was built for each time point and for both acquisition polarities. The OPLS-DA score plots are reported in Figure 4. Table 1 reports the metabolites that discriminated between the two classes, selected based on their variable importance in projection (VIP) values.
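A hedged sketch of this multivariate workflow on a simulated feature matrix is given below. scikit-learn provides PCA and PLS regression; OPLS-DA itself is not part of scikit-learn, so a plain two-component PLS-DA stands in for it here, and the matrix dimensions merely echo the PIA feature count.

```python
# PCA overview followed by a two-class PLS-DA fit, sketching the MVA
# workflow described above. The feature matrix is simulated, not real data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 215))      # 12 samples x 215 PIA features (simulated)
y = np.array([0] * 6 + [1] * 6)     # 0 = control, 1 = clinorotated

Xs = StandardScaler().fit_transform(X)

scores = PCA(n_components=2).fit_transform(Xs)   # unsupervised overview
print("PCA scores, first 3 samples:\n", scores[:3].round(2))

pls = PLSRegression(n_components=2).fit(Xs, y)   # supervised discrimination
print("PLS-DA R2Y on training data:", round(pls.score(Xs, y), 3))
```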
Using MS/MS fragmentation data and consulting the Metlin and Lipidomics libraries, we were able to tentatively identify the most discriminant metabolites as reported in Table 1.
Astronauts, after their return from space missions, manifest significant haematological alterations. Since the earliest space missions, symptoms such as structural alterations of red blood cells [18], anaemia [19], thrombocytopenia [20,21], a 10–$17\%$ reduction in plasma volume, and haemolysis [22] have been reported. For these reasons, concern about the effects of space flight on haematological processes has been increasing. Several scientific studies have proposed different theories that may explain the alterations in the size and number of erythrocytes [23,24]. Recently, Trudel et al. reported degradation and a $54\%$ reduction in red blood cells in astronauts [25]. Different factors can lead a human cell to programmed death, such as changes in lipid signalling activity [26].
Human cells determine the characteristics of the plasma membrane bilayer by tightly controlling lipid composition and recruiting cytosolic proteins involved in structural functions or signal transduction [27]. The cell membrane is a lipid bilayer essentially formed by phospholipids, cholesterol, and glycolipids [28]. Small variations in percentage composition and molar ratio of the different classes of phospholipids and glycolipids might induce changes in the cell membrane’s fluidity and permeability. In particular, phospholipids are the main components of cell membranes and perform important biological functions.
From our results, it appears that after 6 h of clinorotation, levels of phosphocholines were increased in human erythrocytes. In particular, PC 18:1_20:4, PC 18:0_20:4, PC 18:1_18:1, and PC 18:2_18:1 were found to be upregulated. Notably, PCs containing arachidonic acid in their moiety were found to be discriminant. In particular, the proportion of sn-2-arachidonoyl-phosphatidylcholine (20:4-PC) has been shown to be inversely correlated with the activity of protein kinase B (Akt), an important kinase that promotes cell proliferation and survival. 20:4-PC reduces cell proliferation by interfering with the S-phase cell transition and by suppressing Akt downstream signalling and cyclin expression, similarly to LY294002, a specific inhibitor of phosphatidylinositol-3-kinase/Akt [29]. At 9 h, erythrocytes showed further upregulated 20:4-PCs (PC 18:2_20:4, PC 18:3_20:4, and PC 16:0_20:4), and at 24 h, PC 15:0_20:4, PC 15:1_20:4, and PC 16:0_20:4 were upregulated.
With the classical techniques of liquid chromatography coupled to mass spectrometry, the annotation of lipids, and thus of the phosphocholine fatty acid composition, with good confidence is difficult due to the large variety of lipid species with different regiochemistry. In our study, the use of an analytical platform coupling ion mobility to mass spectrometry, which provides the collision cross section (CCS) value, allows a better and more confident annotation of each metabolite. Each CCS was compared with an internal database and against the unified collision cross section compendium available on LipidMaps [30].
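As a rough illustration of this kind of CCS-assisted annotation, the sketch below matches a measured m/z and CCS pair against a reference table within mass (ppm) and CCS (%) tolerances. The reference entries and tolerance values are invented for illustration and do not reproduce the study's internal database.

```python
# Minimal sketch: annotate a feature by joint m/z + CCS matching against a
# reference list. Entries and tolerances are hypothetical examples.
REFERENCE = [
    {"name": "PC 16:0_20:4", "mz": 782.5694, "ccs": 290.1},
    {"name": "SM d18:1/16:0", "mz": 703.5753, "ccs": 281.5},
]

def annotate(mz: float, ccs: float, ppm_tol: float = 10.0, ccs_tol_pct: float = 2.0):
    """Return reference entries whose m/z (ppm) and CCS (%) both fall within tolerance."""
    hits = []
    for ref in REFERENCE:
        ppm = abs(mz - ref["mz"]) / ref["mz"] * 1e6
        dccs = abs(ccs - ref["ccs"]) / ref["ccs"] * 100
        if ppm <= ppm_tol and dccs <= ccs_tol_pct:
            hits.append((ref["name"], round(ppm, 2), round(dccs, 2)))
    return hits

print(annotate(782.5700, 289.0))   # -> [('PC 16:0_20:4', ...)]
```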
Additionally, a different fatty acid composition of membrane components can result in a greater sensitivity to peroxidative stress, with a consequent increase in membrane fragility. Phosphatidylcholine species containing polyunsaturated fatty acids, particularly arachidonate, at the sn-2 position are susceptible to free radical oxidation [31]. An example is 1-palmitoyl-2-arachidonoyl-sn-glycero-3-phosphatidylcholine (PC 16:0_20:4), which is a common cell membrane constituent and circulates within cholesterol particles. At 6 h of clinorotation, erythrocytes showed an upregulation of EtherOxPC 16:0_20:4, while the corresponding phosphocholine, PC 16:0_20:4, was found to be non-discriminant. Simulated microgravity conditions increase reactive oxygen species (ROS) production in various cell types [32]. Generally, a different management of cellular resources is observed under microgravity. In fact, under 0 g conditions there is a more rapid consumption of intracellular ATP and an increase in ATP expulsion compared to cells cultured under terrestrial gravity, coupled with a reduced reducing power [8], resulting in a more oxidizing environment.
Furthermore, inflammation and oxidative stress are associated with lipid peroxidation and the formation of bioactive lipids such as oxidized phosphocholines [33]. C-reactive protein (CRP), an acute-phase protein of hepatic origin that binds to specific structures expressed on the surface of dead or dying cells, promotes phagocytosis, as macrophages may bind to these oxidized PC species. Furthermore, recent studies demonstrate an enrichment of oxidized phosphatidylcholine in apoptotic cells [34]. Indeed, CRP selectively binds oxidized phosphatidylcholine but not native phosphatidylcholine. In addition, oxidized phospholipids are recognized by macrophage scavenger receptors, implying that these innate immune responses participate in cell clearance owing to their proinflammatory properties [35]. Moreover, oxidized phosphatidylcholine, specifically oxidized 1-palmitoyl-2-arachidonoyl-sn-glycero-3-phosphatidylcholine (EtherOxPC 16:0_20:4), seems to be involved in ROS production. According to the study of Rouhanizadeh et al., EtherOxPC 16:0_20:4 was able to induce vascular endothelial superoxide production [36].
On the contrary, after 9 h of clinorotation, EtherOxPC 16:0_20:4 was downregulated, while PC 16:0_20:4 was upregulated, becoming non-discriminant after 24 h. These findings suggest a complex adaptive response of the cells.
Interestingly, several sphingomyelins were found to be downregulated at each experimental time point. This should not be surprising considering the metabolism of sphingomyelin. Indeed, sphingomyelinases (SMases) catalyse the hydrolysis of sphingomyelin to form ceramide and phosphocholine [37].
Taken together, these findings indicate that several mechanisms probably underlie space anaemia: inhibition of 20:4-PC-mediated cell proliferation and a simultaneous increase in pro-apoptotic signals.
## 3.1. Chemicals
Analytical LC-grade methanol, chloroform, acetonitrile, 2-propanol, and ammonium acetate and formate were purchased from Sigma Aldrich (Milan, Italy). Bi-distilled water was obtained with a MilliQ purification system (Millipore, Milan, Italy). A SPLASH® LIPIDOMIX® standard component mixture was purchased from Sigma Aldrich (Milan, Italy): PC (15:0–18:1) (d7), PE (15:0–18:1) (d7), PS (15:0–18:1) (d7), PG (15:0–18:1) (d7), PI (15:0–18:1) (d7), PA (15:0–18:1) (d7), LPC (18:1) (d7), LPC 25, LPE (18:1) (d7), Chol Ester (18:1) (d7), MG (18:1) (d7), DG (15:0–18:1) (d7), TG ((15:0–18:1) (d7)-15:0)), SM (18:1) (d9), cholesterol (d7).
## 3.2. Cell Culture
Freshly drawn blood (Rh+) from nine healthy adults of both sexes was used; heparin was added, and the blood was preserved in citrate-phosphate-dextrose with adenine (CPDA-1). Data are the average ± SD of three independent experiments. RBCs were separated from plasma and leukocytes by washing three times with phosphate-buffered saline (127 mM NaCl, 2.7 mM KCl, 8.1 mM Na2HPO4, 1.5 mM KH2PO4, 20 mM HEPES, 1 mM MgCl2, pH 7.4) supplemented with 5 mM glucose (PBS glucose) to obtain packed cells. This study was conducted in accordance with Good Clinical Practice guidelines and the Declaration of Helsinki. No ethical approval was requested, as human blood samples were used only to sustain in vitro cultures, and donors provided written informed consent at the ASL 1-Sassari (Azienda Sanitaria Locale 1-Sassari) centre before entering the study.
## 3.3. Microgravity Simulation
In order to study the effects caused by microgravity on human erythrocytes, the gravity simulator 3D Random Positioning Machine (RPM, Fokker Space, Netherlands) was used at the laboratory of the Department of Biomedical Sciences, University of Sassari, Sardinia, Italy. The 3D RPM is a micro-weight (‘microgravity’) simulator based on the principle of ‘gravity-vector-averaging’, built by Dutch Space. It is constructed from two perpendicular frames that rotate independently. This setup constantly changes the orientation of the gravity vector so that its mean value approaches zero; in this way, the 3D RPM provides a simulated microgravity of less than 10⁻³ g. The dimensions of the 3D RPM are limited to 1000 × 800 × 1000 mm (length × width × height). The 3D RPM is connected to a computer, through which the mode and speed of rotation were selected using dedicated software. Random Walk mode at 80 degrees/s was chosen.
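The gravity-vector-averaging principle can be illustrated with a toy numerical experiment (this is not the RPM control software): if the sample frame tumbles through uniformly random orientations, the time-averaged gravity vector it experiences shrinks toward zero as the number of orientations grows.

```python
# Toy illustration of gravity-vector-averaging: the mean of many uniformly
# random unit vectors (gravity direction in the rotating sample frame)
# approaches the zero vector.
import numpy as np

rng = np.random.default_rng(0)
n_steps = 100_000
v = rng.normal(size=(n_steps, 3))                # random directions
v /= np.linalg.norm(v, axis=1, keepdims=True)    # unit gravity vectors (1 g each)
mean_g = v.mean(axis=0)                          # time-averaged gravity vector
print(np.linalg.norm(mean_g))                    # ~1/sqrt(n_steps) ~ 3e-3 g here
```

The residual mean scales as roughly $1/\sqrt{n}$, which is the statistical analogue of the sub-$10^{-3}$ g residual gravity quoted for the RPM.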
The red blood cell samples were carefully deposited in 2 mL tubes together with PBS-glucose ($30\%$ haematocrit, approximately 3.4 × 10⁹ cells) in a dedicated room at 37 °C. The control group samples were placed on the static bar at 1 g to undergo the same vibrations as the samples placed in µg conditions. Both control (1 g) and case (0 g) samples were collected after different time points (0, 6, 9, 24 h). Subsequently, the red blood cells were centrifuged and resuspended in 1 mL of lysis buffer [5 mM Na2HPO4, 1 mM EDTA (pH 8.0)] and stored at −20 °C until use for lipidomic analysis or fixed for confocal microscopic analysis.
## 3.4. Sample Preparation for UHPLC-IM-QTOF-MS Analysis
In order to investigate changes in the lipidome, analysis by UHPLC-IM-QTOF-MS requires the extraction of the lipid content from cells [38]. An amount of 50 µL of human erythrocyte solution was extracted following the Folch procedure using 0.700 mL of a methanol and chloroform mixture (2:1, v/v). Samples were vortexed every 15 min for up to 1 h, after which 0.350 mL of chloroform and 0.150 mL of water were added. The resulting solution was centrifuged at 17,700 rcf for 10 min, and 0.600 mL of the organic layer was transferred into a glass vial and dried under a nitrogen stream. The dried chloroform phase was reconstituted with 50 μL of a methanol and chloroform mixture (1:1, v/v) and 75 μL of an isopropanol:acetonitrile:water mixture (2:1:1, v/v/v). Quality control (QC) samples were prepared by taking an aliquot of 10 μL of each sample. All samples thus prepared were injected into the UHPLC-IM-QTOF-MS/MS and acquired in negative ionization mode, while for positive ionization mode they were diluted 1:10.
## 3.5. UHPLC-IM-QTOF-MS/MS Analysis
The chloroform phase was analysed with an Agilent 6560 drift-tube ion mobility Q-TOF LC-MS coupled with an Agilent 1290 Infinity II LC system. An aliquot of 4.0 μL from each sample was injected onto a Luna Omega C18, 1.6 μm, 100 mm × 2.1 mm chromatographic column (Phenomenex, Castel Maggiore (BO), Italy). The column was maintained at 50 °C at a flow rate of 0.4 mL/min. The mobile phases for positive ionization mode consisted of (A) 10 mM ammonium formate in $60\%$ milliQ water and $40\%$ acetonitrile and (B) 10 mM ammonium formate in $90\%$ isopropanol and $10\%$ acetonitrile. In positive ionization mode, the chromatographic separation was obtained with the following gradient: initially $80\%$ A, then a linear decrease from $80\%$ to $50\%$ A in 2.1 min, then to $30\%$ A at 10 min. Subsequently, mobile phase A was further decreased from $30\%$ to $1\%$ and held at this percentage for 1.9 min, and then brought back to the initial conditions in 1 min. The mobile phase for the chromatographic separation in negative ionization mode differed only in the use of 10 mM ammonium acetate instead of ammonium formate.
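For readers reconstructing the method, the positive-mode gradient can be encoded as a simple time/%A table with linear interpolation between breakpoints. Note the assumptions: the cumulative times below are reconstructed from the text, and the text does not state the duration of the $30\%$ → $1\%$ ramp, so the 12.0 min breakpoint is a placeholder, not a value from the study.

```python
# Sketch of the positive-mode gradient as a (time_min, %A) table; %B is the
# balance to 100%. The 12.0 min breakpoint is an assumption (ramp duration
# for 30% -> 1% A is not given in the text).
import numpy as np

GRADIENT = [
    (0.0, 80.0),    # initial conditions
    (2.1, 50.0),    # linear 80 -> 50 %A in 2.1 min
    (10.0, 30.0),   # then to 30 %A at 10 min
    (12.0, 1.0),    # assumed end of the 30 -> 1 %A ramp
    (13.9, 1.0),    # held at 1 %A for 1.9 min
    (14.9, 80.0),   # back to initial conditions in 1 min
]

def percent_a(t_min: float) -> float:
    times, pct = zip(*GRADIENT)
    return float(np.interp(t_min, times, pct))

print(percent_a(5.0))   # %A mid-gradient
```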
An Agilent Jet Stream technology source was operated in both positive and negative ion modes with the following parameters: gas temperature, 200 °C; gas flow (nitrogen), 10 L/min; nebulizer gas (nitrogen), 50 psig; sheath gas temperature, 300 °C; sheath gas flow, 12 L/min; capillary voltage, 3500 V for positive and 3000 V for negative mode; nozzle voltage, 0 V; fragmentor, 150 V; skimmer, 65 V; octapole RF, 7550 V; mass range, 50−1700 m/z; collision energy, 20 eV in positive and 25 eV in negative mode; precursors per cycle, 3. High-purity nitrogen ($99.999\%$) was used as the drift gas, with a trap fill time and a trap release time of 2000 and 500 µs, respectively. Before the analysis, the instrument was calibrated using an Agilent tuning solution over the mass range of m/z 50–1700. Samples were evaporated with nitrogen at a pressure of 48 mTorr and a temperature of 375 °C, while an Agilent reference mass mix for mass re-calibration was continuously injected during the run.
The Agilent MassHunter LC/MS Acquisition console (revision B.09.00) from the MassHunter suite was used for data acquisition.
## 3.6. Data Analysis
Data acquired with the Agilent 6560 DTIM Q-TOF LC-MS were pre-processed with the MassHunter Workstation suite (Agilent Technologies, Santa Clara, CA, USA). This software (Mass Profiler 10.0) allowed us to perform mass re-calibration, DTCCSN2 re-calibration, time alignment, and deconvolution of signals. Background noise and unrelated ions were removed by a recursive feature extraction tool, yielding a matrix containing all the features present across all samples. Furthermore, to eliminate non-specific information, quality assurance of the data matrix was performed. The filtered matrix was then subjected to multivariate statistical analysis using SIMCA software 15.0 (Umetrics, Umeå, Sweden).
First, a principal component analysis (PCA) was carried out. This unsupervised analysis allows the observation of the distribution of samples and variables in the multivariate space on the basis of their similarity and dissimilarity. This was followed by partial least squares-discriminant analysis (PLS-DA) and its orthogonal extension (OPLS-DA), which were used as classification models to visualize and evaluate the differences between sample classes, as sketched below.
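The unsupervised step can be sketched in a few lines of Python (the study itself used SIMCA; this is an illustrative equivalent, with placeholder data shapes). The supervised step follows the same pattern with class labels, as in the PLS-DA/VIP sketch in Section 2.

```python
# Minimal sketch of the unsupervised overview: PCA on an autoscaled
# feature matrix to inspect sample distribution and detect outliers.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(24, 160)              # placeholder: samples x features (e.g., NIA)
Xs = StandardScaler().fit_transform(X)   # centre and scale each feature
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)           # coordinates for the PCA score plot
print(pca.explained_variance_ratio_)     # variance captured by PC1/PC2
```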
## 4. Conclusions
Space anaemia in astronauts has been noted since the earliest space missions, while the contributing mechanisms during space flight remain unclear. To investigate the molecular mechanisms that induce a reduction in the number of erythrocytes during spaceflight, we analysed the lipid profile of human erythrocytes under simulated microgravity conditions. Thanks to the advancement of hyphenated techniques and mass analysers, we were able to identify biologically active complex lipids susceptible to microgravity, allowing new hypotheses that may explain the anaemia experienced by astronauts.
In more detail, lipidomic analysis of erythrocytes revealed a double mechanism that may generate the reduction in the number of red blood cells. On the one hand, there is an increase in the levels of 20:4-PC, reducing cellular proliferation. On the other hand, the increase in the levels of EtherOxPC 16:0_20:4 stimulates the immune response by attracting C-reactive protein and macrophages and induces an increase in ROS production [33,34]. The ROS increase caused by microgravity induces mitochondrial damage and dysfunction, as indicated by the accumulation of HexCer lipid species in clinorotated erythrocytes. This accumulation acts as a pro-apoptotic signal condemning the erythrocytes to death.
In this study, we reported a set of discriminant lipids, or potential biomarkers, linked to microgravity exposure, with the aim of exploring specific lipid pathways in the future and laying the foundation for the development of novel therapeutics to reduce the effects of space flight. However, further studies are needed to accurately measure lipids in erythrocyte samples and to better understand the clinical effects of microgravity.
# Heat-Killed Enterococcus faecalis Inhibits FL83B Hepatic Lipid Accumulation and High Fat Diet-Induced Fatty Liver Damage in Rats by Activating Lipolysis through the Regulation of the AMPK Signaling Pathway
## Abstract
Continuous consumption of high-calorie meals causes lipid accumulation in the liver and liver damage, leading to non-alcoholic fatty liver disease (NAFLD). A hepatic lipid accumulation model is needed to identify the mechanisms underlying lipid metabolism in the liver. In this study, the mechanism by which heat-killed *Enterococcus faecalis* 2001 (EF-2001) prevents hepatic lipid accumulation was examined using FL83B cells (FL83Bs) and high-fat diet (HFD)-induced hepatic steatosis. EF-2001 treatment inhibited oleic acid (OA)-induced lipid accumulation in FL83B liver cells. Furthermore, we performed lipid reduction analysis to confirm the underlying mechanism of lipolysis. The results showed that EF-2001 downregulated proteins of the sterol regulatory element-binding protein 1c (SREBP-1c) pathway and upregulated AMP-activated protein kinase (AMPK) phosphorylation. In OA-induced FL83Bs, EF-2001 enhanced the phosphorylation of acetyl-CoA carboxylase and reduced the levels of the lipogenic proteins SREBP-1c and fatty acid synthase. EF-2001 treatment increased the levels of adipose triglyceride lipase and monoacylglycerol lipase, lipolytic enzymes whose activation contributes to increased hepatic lipolysis. In conclusion, EF-2001 inhibits OA-induced FL83B hepatic lipid accumulation and HFD-induced hepatic steatosis in rats through the AMPK signaling pathway.
## 1. Introduction
As the percentage of high-calorie diets increases and the diet composition of modern society changes [1], overweight and obesity population rates are increasing not only in Korea but also globally [2,3]. Continuous intake of a high-fat diet induces lipid accumulation rather than energy consumption in the body [4]. Excess energy is stored in the form of body fat, which eventually affects lipid metabolism [5]. Regulation of lipid metabolism homeostasis is essential for maintaining the lipid balance in the body [6]. Diabetes, obesity, fatty liver disease, and cardiovascular disease are caused by impaired lipid metabolism [7]. As a result, current research is primarily focused on studying the mechanism underlying lipid metabolism to effectively prevent and improve associated disorders [8,9].
Non-alcoholic fatty liver disease (NAFLD) is associated with obesity, type 2 diabetes, and dyslipidemia, and is regarded as the hepatic manifestation of metabolic syndrome [10,11]. The lipid accumulation model of liver cells induced with oleic acid (OA) is widely used to obtain baseline data for fatty liver research and includes FL83B, HepG2, and huH7 cells [12,13,14,15]. OA-induced intracellular lipid accumulation proceeds via the activation of lipid synthesis pathways, such as the sterol regulatory element-binding protein 1c (SREBP-1c) and peroxisome proliferator-activated receptor (PPAR)-γ pathways [16,17]. It also reduces lipolytic processes, such as the activity of AMP-activated protein kinase (AMPK) and lipases. In particular, studies on lipid accumulation in the liver through OA induction in FL83B cells (FL83Bs) and on the control of the lipid decomposition pathway have primarily focused on the AMPK signaling pathway and lipase enzyme activity [18,19].
AMPK is a vital energy sensor that is activated by rising AMP levels when cellular energy is depleted. It plays an important role in lipid and carbohydrate metabolism in the liver and is a major factor in the recovery from obesity and diabetes [20,21,22]. In addition, AMPK decreases lipid accumulation by regulating PPAR-α expression [23].
AMPK activity can induce acetyl-CoA carboxylase (ACC) phosphorylation and decrease ACC activity to suppress lipid biosynthesis [24]. Thus, phosphorylation of AMPK not only maintains energy balance but also inhibits the formation of triglycerides (TGs) to reduce lipid accumulation in the liver. In addition, SIRT1 plays a role in regulating AMPK activity to enhance AMPK phosphorylation in adipocytes and hepatocytes [25].
Hepatic lipid synthesis is driven by the transcription and translation of genes including SREBP-1 and fatty acid synthase (FAS) [26,27]. Hepatocytes activate adipose triglyceride lipase (ATGL), hormone-sensitive lipase (HSL), and monoacylglycerol lipase (MGL) to decompose TGs into glycerol and free fatty acids, which feed the citric acid cycle for energy production [28,29]. Released free fatty acids stimulate macrophages in the liver to cause an inflammatory reaction, and activated macrophages release inflammatory mediators that induce insulin resistance in liver cells [30,31].
Enterococcus faecalis promotes intestinal microbiota balance, alleviates metabolic syndrome, and modulates immunity, among other functions [32]. E. faecalis is also effective in treating hyperlipidemia, obesity, and fatty liver disease [33]. Probiotic strains of E. faecalis have been identified through isolation from fecal samples of healthy individuals [34]. It has been demonstrated that E. faecalis is beneficial not only when alive but also when dead [35]. Recently, the genome sequence of EF-2001 was revealed, and EF-2001 was found to significantly inhibit depression by enhancing prefrontal local myelination [36]. EF-2001 has been shown to have beneficial effects on human health, including radioprotective, antitumor, anti-inflammatory, and anti-atopic dermatitis effects, as well as the prevention of muscle atrophy [37,38,39,40]. In animal models of prostatic hyperplasia, EF-2001 was also found to be effective [41]. Previous studies have reported that certain products utilizing bacteria, such as *Lactobacillus plantarum* NCU116, *Lactobacillus acidophilus* NX2-6, and other Lactobacillus strains that overexpress bile salt hydrolase, can inhibit hepatic accumulation of lipids [42]. Several other studies have reported the inhibitory effects of bacterial products on lipid accumulation, including products utilizing bacteria such as Lactobacillus sakei ADM14, L. brevis OPK-3, and L. plantarum LMT1-48 [43,44,45,46]. Recently, we demonstrated that administration of EF-2001 exhibits an anti-obesity effect in high-fat diet (HFD)-induced rats. Our results showed that the intake of EF-2001 significantly prevented HFD-induced obesity in rats by inhibiting C/EBP-α and PPAR-γ in the insulin signaling pathway, thus reducing lipid accumulation [47]. Another study reported that heat-treated E. faecalis FK-23 could ameliorate HFD-induced obesity in mice. The inhibitory effect of FK-23 on hepatic steatosis in HFD-fed mice was mediated by the prevention of fat accumulation in the liver through modulation of the activities of genes involved in hepatic fatty acid oxidation [48]. Mishra and Ghosh reported on the synergistic effect of the probiotic E. faecalis AG5 on HFD-induced obesity and the role of propionic acid (PA) in the induction of apoptosis in 3T3-L1 pre-adipocytes [33]. AG5 was found to reduce adipocyte hypertrophy and fatty acid accumulation. This study revealed that low PPARγ activity inhibits 5-LOX, which may be related to adipose apoptosis, and that 5-LOX inhibition increased caspase activity, which is associated with the initiation of cell death [33]. Fan et al. reported that heat-killed E. faecalis improved the abnormal hepatic lipid metabolism in diet-induced obese (DIO) mice by reducing triglyceride (TG) accumulation [49]. This suggests that administration of EF-2001 may be effective in attenuating hepatic steatosis, as atherogenic dyslipidaemia has been found to be associated with hepatic steatosis after adjusting for obesity, physical activity, and hyperglycemia [50].
In this study, the effect of heat-killed E. faecalis EF-2001 on liver lipid accumulation in HFD-induced rats was investigated, along with its effects on lipase enzyme activity and the AMPK signaling pathway, to provide a new theoretical basis for the treatment of hepatic lipid metabolic disorders.
## 2.1. EF-2001 Intake Effectively Prevents Fatty Liver Tissue and Liver Damage in HFD-Induced Rats
To establish HFD-induced hepatic steatosis, male rats were divided into SD and HFD groups. Rats were orally administered refined water or EF-2001 in water at each dose per day, as scheduled. HFD groups were subcategorized into three groups (refined water only, 3 mg/kg, or 30 mg/kg EF-2001 in water) to evaluate the effects of EF-2001 on fatty liver-induced rats. We investigated the effects of EF-2001 intake on HFD-induced non-alcoholic fatty liver disease (NAFLD). Rats fed the HFD weighed significantly more than rats fed the SD. In the HFD group, brightened and enlarged livers with fat accumulation were observed. Among the HFD groups, the appearance of the liver in EF-2001-treated rats, with accumulated bright-toned fat, was similar to that of the HFD group (Figure 1A), but the size of the liver was similar to that of the SD group rats. HFD rats administered either 3 mg/kg or 30 mg/kg EF-2001 demonstrated a reduction in liver weight (Figure 1B).
Both glutamic oxaloacetic transaminase (GOT) and glutamic pyruvic transaminase (GPT) levels, markers of liver damage, were significantly increased by the HFD. EF-2001 administration at both 3 mg/kg and 30 mg/kg significantly reduced these HFD-elevated GOT and GPT levels (Figure 1C,D). These results show that EF-2001 intake attenuated HFD-induced fatty liver damage.
## 2.2. Effect of EF-2001 on Oleic Acid-Induced Hepatic Lipid Accumulation in FL83Bs
We measured hepatic lipid accumulation with or without EF-2001 in FL83Bs to investigate how EF-2001 affects lipid accumulation in oleic acid (OA)-induced FL83Bs. The effects of EF-2001 on OA-induced hepatic lipid accumulation in FL83Bs were examined by ORO staining. FL83Bs were pretreated with OA (0.5 mM) in serum-free medium for 48 h and then treated with EF-2001 (0, 25, 50, 100, or 250 μg/mL) for 24 h. OA induction significantly increased lipid accumulation in the FL83Bs. However, treatment with EF-2001 (25, 50, 100, or 250 μg/mL) significantly decreased OA-induced lipid accumulation in FL83Bs (Figure 2).
## 2.3. Effect of EF-2001 on Neutral Lipid Droplets in Oleic Acid-Induced FL83B Hepatic Lipid Accumulation
We conducted confocal microscopy in OA-induced FL83Bs to investigate how lipogenesis and lipolysis occur during lipid accumulation in cells treated with EF-2001 (0, 25, 50, 100, or 250 μg/mL). After 48 h of OA induction, lipid synthesis increased in the control group. In the EF-2001-treated group, intracellular neutral lipid droplets decreased over the 24 h of treatment (Figure 3). Therefore, we confirmed that EF-2001 inhibited intracellular lipid accumulation.
## 2.4. Effects of EF-2001 on the Expression of Lipase Enzyme Protein in FL83Bs
After OA induction, ATGL and MGL expression was observed within 24 h of EF-2001 treatment in FL83Bs. ATGL, an early lipolytic enzyme, showed increased expression in EF-2001-treated FL83Bs in a dose-dependent manner. EF-2001 also increased the expression of MGL, a late signal of lipolysis, in a dose-dependent manner in FL83Bs (Figure 4). Additionally, it was found that EF-2001 contributed to lipase activation in OA-induced hepatic lipid accumulation in FL83Bs.
## 2.5. Effects of EF-2001 on the Expression of AMPK and SREBP Signaling Pathway
To identify the mechanism of lipolysis induced by EF-2001, the effects of EF-2001 on the AMPK and SREBP signaling pathways were investigated. We compared AMPK signaling pathway-related proteins (AMPK and ACC) and lipid synthesis-related proteins (SREBP-1C and FAS) in OA-induced FL83B hepatocytes. EF-2001 treatment significantly increased the phosphorylation of AMPK and ACC in a dose-dependent manner (Figure 5B,C) and decreased SREBP-1C and FAS expression in a dose-dependent manner (Figure 5D,E). Thus, these results indicate that EF-2001 promotes hepatic lipid decomposition in OA-induced FL83Bs through activation of the AMPK signaling pathway.
## 2.6. Effects of EF-2001 on AMPK Targeted Signaling Pathway
We conducted experiments to determine the relationship between the expression of lipid-related biomarkers and EF-2001 treatment under altered AMPK signaling, using an AMPK activator (AICAR) and an AMPK inhibitor (compound C). We observed that AMPK phosphorylation was increased in FL83Bs treated with AICAR, and FL83Bs treated with both AICAR and EF-2001 showed increased ACC phosphorylation and ATGL expression (Figure 6). Interestingly, co-treatment of FL83Bs with the AMPK inhibitor compound C and EF-2001 still resulted in phosphorylation of AMPK and ACC and expression of the lipase protein ATGL. We also observed a significant difference in the degree of inhibition of SREBP-1C expression in the EF-2001-treated group (Figure 7).
## 2.7. Effects of EF-2001 on the AMPK Signaling Pathway in HFD-Induced Fatty Liver
Finally, we investigated the effect of EF-2001 on the protein expression levels of the AMPK signaling pathway in the liver tissues of SD- and HFD-fed rats. During hepatic lipid accumulation, p-AMPK was upregulated in the EF-2001 group. In addition, experiments were conducted to investigate how EF-2001 treatment affects the expression levels of lipolysis-related proteins such as p-AMPK, p-ACC, and ATGL during lipogenesis (Figure 8A). In the HFD group, AMPK phosphorylation was increased by EF-2001 treatment (Figure 8B); however, ACC phosphorylation did not change (Figure 8C). Moreover, oral administration of EF-2001 (30 mg/kg) to the HFD group effectively decreased the protein expression level of SREBP-1C below that in the EF-2001-untreated HFD group (Figure 8D). In addition, ATGL protein expression increased in the 3 mg/kg and 30 mg/kg EF-2001-treated groups (Figure 8E). However, there was no significant difference in MGL expression in the HFD group (Figure 8F).
## 3. Discussion
In our previous studies, we reported that EF-2001 downregulated total cholesterol, TG, and low-density lipoprotein (LDL)-cholesterol levels, which are associated with an increased risk of NAFLD, in HFD-induced rats [47]. Downregulation of LDL levels may be an important strategy for the prevention of NAFLD. Since oral administration of EF-2001 suppressed LDL levels, we hypothesized that EF-2001 would have an effect on NAFLD. Therefore, follow-up experiments were conducted to determine the effects and molecular mechanisms underlying NAFLD.
We demonstrated that the oral administration of EF-2001 lowered GOT and GPT levels at both doses (3 mg/kg and 30 mg/kg EF-2001). In addition, both doses reduced the liver weight in HFD-fed rats (Figure 1). Our results indicate that EF-2001 administration decreases liver damage by reducing the physical size of the HFD-induced fatty liver and lowering the levels of enzymes, such as GOT and GPT, released in the blood following liver damage.
FL83B cells are derived from normal mouse liver tissue, and lipid accumulation in FL83Bs plays an important role in modeling lipogenesis, lipolysis, and the onset of NAFLD [51]. We examined the effects of EF-2001 on lipid accumulation in FL83B hepatocytes stained with ORO and confirmed that treatment with EF-2001 inhibited hepatic lipid accumulation (Figure 2). Furthermore, we observed a reduction in lipid droplets following EF-2001 treatment in FL83B hepatocytes (Figure 3).
To confirm the anti-lipid accumulation effect of EF-2001, we analyzed the molecular mechanisms by which EF-2001 inhibits lipid accumulation in FL83Bs. Several studies have noted the importance of lipolytic enzymes such as ATGL and MGL in the regulation of hepatic lipid accumulation [52,53]. Our results also showed that ATGL and MGL levels increased in a dose-dependent manner upon EF-2001 treatment, proving that EF-2001 contributes to an increase in lipolytic enzyme expression in FL83B hepatocytes (Figure 4).
Hence, we hypothesized that EF-2001 could reduce lipid synthesis and increase lipolysis by activating the AMPK pathway, thereby inhibiting the development of fatty liver cells. EF-2001 inhibited the expression of SREBP-1C signaling pathway proteins, such as SREBP-1C and FAS, which mediate the lipid accumulation process (Figure 5). We found that EF-2001 not only significantly enhanced AMPK phosphorylation but also enhanced ACC phosphorylation, inhibiting the synthesis of fatty acid chains (Figure 5).
AMPK is an energy regulator that assists in regulating glucose and lipid metabolism to maintain the cellular energy balance [54]. Recent studies have also reported a relationship between the AMPK signaling pathway and lipolysis [55]. AMPK activation induces lipolysis in hepatocytes and plays a crucial role in lipolysis progression [56,57]. The SREBP-1C pathway is downregulated by phosphorylated AMPK [55]. ACC-1 is responsible for synthesizing fatty acids and is controlled by inhibitory phosphorylation by AMPK [58]. Confirming the relationship between AMPK and EF-2001, EF-2001 recovered or increased ACC and AMPK phosphorylation in hepatocytes treated with compound C, an AMPK inhibitor, or AICAR, an AMPK activator (Figure 6 and Figure 7). This is related to the inhibition of the AMPK signaling pathway, which is a key factor in lipogenesis. Consequently, EF-2001 induced the phosphorylation of AMPK, activated several lipases, and inhibited the SREBP-1C signaling pathway in HFD-induced obese rats (Figure 8). Therefore, we suggest that EF-2001 inhibits fatty liver development and may suppress NAFLD through AMPK phosphorylation.
In addition to the metabolites produced by specific members of the microbial community, there are also metabolites that are consumed or transformed by bacteria outside the microbial community, making gut microbial metabolism a highly complex process. Although the composition of the microbial community determines the metabolism of microorganisms in the intestine, the substrates available to the microbial community are the most important factor, since the metabolites produced from specific substrates reflect gut microbial metabolism [59].
When food is consumed, certain components containing choline groups are metabolized by the gut microbiota (GM) in the intestine, producing GM-derived products such as trimethylamine (TMA), short-chain fatty acids (SCFAs), and trimethylamine N-oxide (TMAO). Metabolites such as TMA and TMAO have been identified as causative agents of metabolic diseases in animal models and human clinical studies. Recent research has revealed that GM and its metabolites play an important role in the development and progression of cardiovascular and metabolic diseases [60,61,62,63,64]. Therefore, in the development of drugs for obesity and metabolic diseases, the metabolic pathways by which these GM-derived metabolites are synthesized can be considered important new therapeutic targets. Currently, our results do not identify the bacteria capable of modulating host TMA/TMAO, and this aspect needs to be analyzed in future studies.
In some studies, the relationship between Enterococcus and liver injury has been described. Ray et al. reported that the proportion of cytolysin-associated E. faecalis was increased in the feces of patients with alcoholic fatty liver, while Lang et al. reported that the proportion of cytolysin-associated E. faecalis increased in non-alcoholic fatty liver patients, although this finding is under debate [65,66]. In a study by Tan, non-alcoholic fatty liver was inhibited by reducing Roseburia, Intestinibacter, and Enterococcus [67]. There are various other claims; therefore, more research is required. Since the gut microbiome changes dramatically even with a high-fat diet, we are currently conducting a clinical trial to analyze the effects of fecal E. faecalis on obesity and metabolic diseases.
Intestinal microbes have been implicated in the pathogenesis of gastrointestinal diseases and metabolic syndromes, such as obesity. Microbes can also produce metabolites and gene products and exhibit pathogenic potential that can negatively affect the host [68]. Therefore, it is important to investigate the potential negative effects of consuming heat-treated dead cells of Enterococcus in future studies.
In order to weigh the benefits and harms caused by the gut microbiome from the host’s perspective, a comprehensive analysis of the distribution, diversity, species composition, and metabolites of the microbiome must be performed. For example, the production of SCFAs and vitamins by the gut microbiome has positive effects on energy supply and nutrition, but if this process falls outside the normal range, it can lead to disease. Although there are no studies on intestinal E. faecalis under a high-fat diet, we are currently conducting clinical trials and plan to analyze the distribution of E. faecalis in the feces, as well as changes in its structure and function.
We recently discovered that E. faecalis EF-2001 inhibits toll-like receptor (TLR) signaling via anti-inflammatory effects [39]. This suggests the possibility of suppressing metabolic syndromes such as obesity and NAFLD. When the composition of the gut microbiome changes, the uptake of TLR4 ligands (e.g., LPS) and TLR9 ligands (e.g., bacterial DNA) is increased, and these ligands are delivered to the liver through the portal vein. Therefore, blocking TLR signaling in the liver could suppress metabolic syndromes such as obesity, NAFLD, and NASH [69]. Microbial-derived SCFAs (acetate, butyrate, and propionate) may be beneficial to the host as sources of carbon and energy. In fact, many studies on the beneficial effects of SCFAs on obesity, appetite, and inflammation in the colon have been published [70,71,72]. The structural components of heat-killed E. faecalis or the secreted substances that modulate the activity of the lipid pathway were not identified in this study. However, these findings provide a starting point for further research on the potential association between heat-killed E. faecalis and metabolic diseases such as obesity and fatty liver.
We concluded that EF-2001 directly lowered hepatic lipid accumulation through the regulation of AMPK signaling. In conclusion, our study suggests that EF-2001 may be a promising candidate for reducing various diseases, including liver damage in obese individuals, by decreasing liver lipid accumulation. However, further research is needed to identify the specific components of EF-2001 that modulate lipogenesis and lipolysis mechanisms to gain a better understanding of its potential therapeutic effects.
As next-generation sequencing became common after the 2000s, metagenomic research has become increasingly popular. This analysis method has shown that various factors, such as diet, race, age, antibiotics, stress, psychological factors, maternal health, birth methods such as natural childbirth, environmental factors, and exercise affect the distribution of intestinal microbes [73]. However, research on each of these factors is still incomplete, and more studies are needed to fully understand their impact on the gut microbiome.
## 4.1. Preparation of Heat-Killed Enterococcus faecalis (EF-2001)
EF-2001, originating from human feces, is a commercially available parabiotic purified by Bereum Co., Ltd. (Wonju, Republic of Korea) and supplied as a heat-killed, dried powder. Prior to being heat-killed and dried, EF-2001 contained 7.5 × 10¹² units per gram.
## 4.2. Chemical Reagent
5-aminoimidazole-4-carboxamide-1-β-D-ribofuranoside (AICAR, an AMPK activator) and compound C (an AMPK inhibitor) were purchased from Sigma-Aldrich (St. Louis, MO, USA).
## 4.3. Animal Experiments
Twenty-four male Sprague–Dawley rats (3 weeks old) with an initial body weight of 30–40 g were purchased from Orient Bio Tech Laboratories (Gyeonggi, Republic of Korea) and used in the experiments. The rats were acclimatized for one week, and food intake measurements were initiated. The rats were divided into four groups ($$n = 6$$/group) according to their diet. For the experimental procedures in HFD-induced obese rats, rats in the standard diet (SD) group were fed a commercial diet (5L79, Lab Diet Inc., St. Louis, MO, USA), and those in the HFD group were fed an HFD (D12492, Research Diets, Inc., New Brunswick, NJ, USA) for six weeks. HFD-fed rats were subcategorized into three groups: water only, 3 mg/kg EF-2001, and 30 mg/kg EF-2001 in water. Water and food were provided ad libitum at all times. Rats in the HFD group were orally administered pure water or EF-2001 (3 or 30 mg/kg) in water once daily, and gavage was continued for 6 weeks. All experimental procedures were approved by the Institutional Animal Care and Use Committee of Yonsei University and were performed in accordance with approved guidelines (YWCI-202102-003-01).
## 4.4. Serological Analysis
Blood serum was sampled at six weeks by cardiac puncture under ether anesthesia using a sterilized vacutainer tube. Serum samples were analyzed to determine the activities of hepatic enzymes, including alanine aminotransferase (ALT) and aspartate aminotransferase (AST), using ALT and AST detection kits purchased from Asan Pharmaceutical (Seoul, Republic of Korea). The kits were used in accordance with the manufacturer’s instructions.
## 4.5. Cell Culture and Induced Fatty Liver Cells
FL83Bs, purchased from the American Type Culture Collection, were maintained in F12K medium, supplemented with $10\%$ fetal bovine serum, $1\%$ penicillin, and $1\%$ streptomycin (Sigma-Aldrich). Cells were cultured at 37 °C in an incubator with $5\%$ CO2. FL83Bs were seeded in a complete medium for 24 h and then incubated with OA (0.5 mM) for 48 h to induce lipid accumulation. The cells were treated with or without EF-2001 (0, 25, 50, 100 or 250 μg/mL) for 24 h to analyze the experimental results.
## 4.6. Oil Red O Staining of FL83B Hepatocyte
Lipid accumulation was determined by Oil Red O (ORO) staining. At $100\%$ cell confluence, cells were treated with EF-2001 at doses of 0, 25, 50, 100, and 250 μg/mL in differentiation induction medium, followed by the induction of lipid accumulation in FL83Bs. FL83Bs were washed with phosphate-buffered saline (PBS), fixed with $3.7\%$ formaldehyde (Junsei Chemical, Tokyo, Japan) diluted in PBS, and stained with $60\%$ ORO diluted in distilled water. Once the stain was eluted with $100\%$ isopropanol, lipid accumulation was quantified at a 490 nm wavelength using a microplate reader (Molecular Devices, San Jose, CA, USA), and the results are presented in the graphs. The percentage of ORO staining, representing the proportion of stained intracellular lipid droplets, was expressed relative to that of the untreated control cells, as sketched below.
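The relative quantification amounts to a simple ratio of absorbances. The sketch below shows the calculation; the absorbance values are invented for illustration and are not data from the study.

```python
# Minimal sketch: ORO staining expressed as a percentage of the untreated control.
import numpy as np

control_a490 = np.array([0.52, 0.55, 0.50])   # A490 of untreated control wells (placeholder)
treated_a490 = np.array([0.41, 0.39, 0.44])   # A490 of, e.g., 100 ug/mL EF-2001 wells (placeholder)

pct = treated_a490.mean() / control_a490.mean() * 100
print(f"ORO staining: {pct:.1f}% of control")
```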
## 4.7. Western Blot Analysis
FL83Bs and liver tissues of HFD-induced rats were treated with EF-2001, and proteins were extracted in lysis buffer (iNtRON Biotechnology Inc., Sungnam, Republic of Korea) at the appropriate stage of hepatic lipid accumulation. After sonication, proteins were quantified using the Bradford assay (Bio-Rad, Hercules, CA, USA) for Western blotting. The sodium dodecyl sulfate–polyacrylamide gel percentage was chosen based on the molecular weight of the target protein. Electrophoresis was performed at 100 V for approximately 2 h. Membranes were incubated with primary antibodies (SREBP-1C, P-AMPK, AMPK, FAS, P-ACC, ACC, ATGL, P-HSL, MGL, CD36, and β-actin) at a dilution of 1:2500 overnight at 4 °C. The membrane was washed three times with tris-buffered saline containing Tween 20 for 10 min, and then secondary antibodies were added at a dilution of 1:5000 for 2 h at room temperature (RT). The transferred protein bands on the polyvinylidene difluoride membrane were visualized using an LAS 4000 system (GE Healthcare, Little Chalfont, UK) by inducing an enhanced chemiluminescence reaction, and the signal intensity was quantified using ImageJ software (NIH), as sketched below.
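Densitometric quantification of this kind is typically normalized to the loading control before comparing groups. The sketch below illustrates one plausible normalization under that assumption; the band intensities are placeholders, not the study's measurements.

```python
# Sketch: normalize ImageJ band intensities to beta-actin, then express as
# fold change over the untreated group. Values are hypothetical.
import numpy as np

band  = np.array([1200.0, 1850.0, 2400.0])   # e.g., p-AMPK at 0/100/250 ug/mL (placeholder)
actin = np.array([1500.0, 1480.0, 1520.0])   # beta-actin loading control (placeholder)

norm = band / actin          # loading-corrected intensity
fold = norm / norm[0]        # fold change vs. untreated control
print(fold)
```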
## 4.8. Confocal Microscopy
For confocal microscopy, FL83B hepatocytes were cultured on a 3 cm cover-glass plate (Mattek Corp, Lauda-Königshofen, Baden, Germany), differentiated, and treated with EF-2001. To facilitate observation of the nucleus, cells were fixed with paraformaldehyde for 10 min at RT and stained with the fluorescent dye 4′,6-diamidino-2-phenylindole diluted in PBS. To visualize triglycerides, paraformaldehyde-fixed cells were incubated with fluorescent BODIPY 493/503 dye (Thermo Fisher Scientific, Waltham, MA, USA) diluted in PBS for 30 min at RT. Fluorescence was visualized using an LSM710 confocal microscope (Carl Zeiss, Oberkochen, Germany).
## 4.9. Statistical Analysis
The experimental results are expressed as mean ± standard error (SE). Analysis of variance and paired or unpaired t-tests were performed for statistical analysis, as appropriate, as sketched below. A p-value < 0.05 was considered statistically significant. All experiments were performed at least three times.
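The stated workflow maps directly onto standard library calls. The sketch below uses scipy with invented placeholder values, not the study's data.

```python
# Minimal sketch: one-way ANOVA across groups, then an unpaired t-test
# between two groups of interest; alpha = 0.05.
import numpy as np
from scipy import stats

sd   = np.array([98.0, 105.0, 101.0])    # e.g., GOT in SD group (placeholder)
hfd  = np.array([160.0, 172.0, 168.0])   # HFD group (placeholder)
ef30 = np.array([120.0, 115.0, 126.0])   # HFD + 30 mg/kg EF-2001 (placeholder)

f_stat, p_anova = stats.f_oneway(sd, hfd, ef30)   # one-way ANOVA
t_stat, p_t = stats.ttest_ind(hfd, ef30)          # unpaired t-test
print(p_anova < 0.05, p_t < 0.05)
```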
## 5. Conclusions
In this study, we identified the mechanism of EF-2001 action on hepatic lipid accumulation in OA-induced FL83Bs. EF-2001 inhibited hepatic lipid accumulation in OA-induced FL83Bs, notably regulating lipogenesis by activating the AMPK signaling pathway. EF-2001 reduced the expression of downstream effectors of the SREBP-1C signaling pathway, such as SREBP-1C and FAS. In addition, EF-2001 induced the activation of lipolytic enzymes, such as ATGL and MGL, and AMPK phosphorylation was increased by EF-2001 in AICAR and compound C treatments that targeted AMPK. Based on our results, we found that EF-2001 targeted the AMPK signaling pathway during FL83B hepatic lipid accumulation (Figure 9). We suggest that EF-2001 regulates HFD-induced abnormal lipid accumulation in the liver. Therefore, EF-2001 may prevent NAFLD associated with hepatic lipid accumulation and could potentially serve as a basis for the treatment of these diseases.
# Compositional Alteration of Gut Microbiota in Psoriasis Treated with IL-23 and IL-17 Inhibitors
## Abstract
Alterations in gut microbiota composition and the associated metabolic dysfunction exist in psoriasis. However, the impact of biologics on shaping the gut microbiota is not well known. This study aimed to determine the association of gut microorganisms and microbiome-encoded metabolic pathways with treatment in patients with psoriasis. A total of 48 patients with psoriasis, including 30 cases who received an IL-23 inhibitor (guselkumab) and 18 cases who received an IL-17 inhibitor (secukinumab or ixekizumab), were recruited. Longitudinal profiles of the gut microbiome were obtained using 16S rRNA gene sequencing. The gut microbial compositions changed dynamically in psoriatic patients during the 24-week treatment. The relative abundance of individual taxa altered differently between patients receiving the IL-23 inhibitor and those receiving the IL-17 inhibitor. Functional prediction of the gut microbiome revealed that microbial genes related to metabolism, involving the biosynthesis of antibiotics and amino acids, were differentially enriched between responders and non-responders receiving IL-17 inhibitors, while the abundance of the taurine and hypotaurine pathway was augmented in responders treated with the IL-23 inhibitor. Our analyses showed a longitudinal shift in the gut microbiota of psoriatic patients after treatment. These taxonomic signatures and functional alterations of the gut microbiome could serve as potential biomarkers of the response to biologics in psoriasis.
## 1. Introduction
Psoriasis is an inflammatory skin disease that is associated with many other medical conditions and affects adults and children worldwide [1]. Overall prevalence ranges from $0.1\%$ in east Asia to $1.5\%$ in western Europe and is highest in high-income countries [2,3]. Most patients with psoriasis have some detriment to their quality of life attributable to the disease, and many feel a substantial negative effect on their psychosocial wellbeing. It is thought that psoriasis involves the interplay between predisposing genetic and environmental (e.g., infection and antibiotic treatment) factors [1,4,5,6]. Studies have shown that the skin and gut microbiomes play a role in modulating the development of chronic plaque psoriasis [7]. Recent evidence revealed a combined increase in Corynebacterium, Propionibacterium, Staphylococcus, and Streptococcus in psoriatic plaque sites [7,8]. Gut microbiota is known to play a critical role in the regulation of metabolism, the immune system, and intestinal permeability [9]. A disturbed intestinal microbiome has been shown to be involved in a number of autoimmune diseases, including type 1 diabetes, rheumatoid arthritis, multiple sclerosis, celiac disease, and inflammatory bowel disease (IBD) [10,11]. In psoriasis, similar evidence demonstrated gut dysbiosis with lower diversity and altered relative abundance of certain bacteria [9,12]. Several studies have found that the relative abundance of Bacteroidetes was lower and that of Firmicutes was higher in patients with psoriasis compared to healthy controls [12,13,14]. However, an inconsistent result reported by Huang et al. revealed an increased abundance of Bacteroidetes and decreased Firmicutes in psoriasis [15]. These changes in gut microbiota are considered to be crucial causes for initiating or exacerbating psoriasis in humans and animal models [16,17].
Treatment for psoriasis may change the composition of the skin and gut microbiota [18,19,20]. A change in lesional skin microbiota has been associated with a clinical response after balneotherapy [18] and phototherapy [19]. A reduced mean abundance of *Staphylococcus aureus* on psoriatic plaques, reaching a nadir at weeks 16–20 after treatment, was noted in our previous research [20]. Regarding gut microbial changes after psoriasis treatment, the relative abundance of Pseudomonadaceae and Enterobacteriaceae increased significantly following secukinumab therapy, while no significant change was noted in gut microbiome composition following ustekinumab treatment [21].
In the past 20 years, findings from immunological and genetic studies have highlighted causal immunological circuits of psoriasis that converge on adaptive immune pathways involving interleukin (IL)-17 and IL-23 [1,22,23]. The suppression of psoriasis-related, proinflammatory, Th17-associated cytokines, such as tumor necrosis factor (TNF)-α, IL-17A, and IL-23, was observed in mice fed with *Lactobacillus pentosus* [24]. The clinical significance of the interaction between microbiota and the immune system is therefore of importance. Although guselkumab, a selective IL-23 inhibitor, and secukinumab and ixekizumab, monoclonal antibodies targeting IL-17A, are highly effective in treating psoriasis, their treatment results in IBD have not been consistent. In clinical trials, biologics blocking either IL-17A or its receptor contributed to the exacerbation of IBD [25,26]. This raised the possibility that blockade of IL-17 could interfere with the microbiota composition and homeostasis in the intestine, which might predispose susceptible individuals to develop IBD [27,28]. Moreover, in a phase 2 trial, guselkumab demonstrated greater efficacy than a placebo in patients with Crohn’s disease [29]. These findings indicate a sophisticated interaction between gut microbiota composition and biologic therapies. Yet, how the gut microbiota in psoriasis reacts to IL-17 and IL-23 blockers has scarcely been investigated. Therefore, this study aimed to investigate the dynamic alteration of gut microbiota in psoriasis patients before and after receiving IL-17 and IL-23 antagonists.
## 2.1. Patient Demographic and Characteristics
A total of 192 fecal samples were obtained from 48 patients: 30 receiving the IL-23 inhibitor (guselkumab; mean age 45.2 years) and 18 receiving IL-17 inhibitors (secukinumab or ixekizumab; mean age 52.8 years). There was no significant difference in gender, weight, psoriatic arthritis, baseline PASI score, or baseline CRP level between the two groups. Patients treated with an IL-17 inhibitor were older than patients treated with the IL-23 inhibitor (Table 1).
The mean PASI scores decreased at weeks 4, 12, and 24 after either IL-23 or IL-17 inhibitor therapy, and all these changes from baseline were significant (Figure 1A). In addition, the CRP level was significantly reduced after 12 and 24 weeks of treatment (Figure 1B). Moreover, we found that the recruited patients did not change their eating habits during the study.
## 2.2. Gut Microbial Diversity in Psoriasis after the Treatment with IL-23 and IL-17 Inhibitors
We studied the temporal alteration of microbial diversity in patients treated with IL-23 or IL-17 inhibitors. Calculation of the weighted-UniFrac distance matrix (β diversity) revealed significantly altered distances in microbial community structures among samples from patients receiving an IL-23 or IL-17 inhibitor during the 24-week treatment, while no significant difference in α diversity was observed among the groups (Figure 2A). Moreover, the Bray–Curtis distance was used to measure β diversity at weeks 0 and 24 among responders (R) and non-responders (NR) (Figure 2B). The results showed that the β diversity of gut microbiota in responders to the IL-23 inhibitor was significantly higher than that in non-responders both at baseline and at week 24 ($p \leq 0.05$), while there was no significant difference in β diversity between responders and non-responders treated with IL-17 inhibitors.
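A β-diversity comparison of this kind can be sketched with scikit-bio: compute a Bray–Curtis distance matrix from a count table, then test group separation with PERMANOVA. This is an illustrative workflow under stated assumptions, not the study's pipeline; the count table, sample IDs, and responder labels are invented.

```python
# Sketch: Bray-Curtis beta diversity from an OTU/ASV count table, followed by
# a PERMANOVA test between responders and non-responders. Data are placeholders.
import numpy as np
from skbio.diversity import beta_diversity
from skbio.stats.distance import permanova

rng = np.random.default_rng(1)
counts = rng.integers(0, 500, size=(12, 40))        # 12 samples x 40 taxa (placeholder)
ids = [f"S{i}" for i in range(12)]
grouping = ["responder"] * 6 + ["non-responder"] * 6

dm = beta_diversity("braycurtis", counts, ids)       # sample-by-sample distance matrix
result = permanova(dm, grouping=grouping, permutations=999)
print(result["p-value"])
```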
## 2.3. Altered Composition of Gut Microbiota in Psoriatic Patients after Treatment with IL-23 and IL-17 Inhibitors
We then sought the most relevant taxa whose abundance altered after the treatment (week 24) to explore the effect of biologics on the composition of gut microbiota. In patients treated with the IL-23 inhibitor, we identified five taxa whose levels were significantly different from the baseline (Figure 3). The relative abundance of Roseburia, Lachnoclostridium, Bacteroides vulgatus, Anaerostipes, and Escherichia–Shigella increased over the time course of the treatment. In patients treated with IL-17 inhibitors, levels of *Bacteroides stercoris* and *Parabacteroides merdae* were significantly increased at week 24, while those of Blautia and Roseburia were significantly reduced (Figure 3).
## 2.4. Changes in Relative Abundance of Gut Bacteria between Responders and Non-Responders
Furthermore, we assessed the association between the therapeutic outcome and changes in relative abundance of individual taxa from the baseline to 24 weeks post-treatment. We found that among patients treated with the IL-23 inhibitor for 24 weeks, the relative abundance of Lachnospiraceae and Romboutsia significantly decreased from the baseline in the responders compared to non-responders (Figure 4). Meanwhile, the relative abundance of Fusicatenibacter in patients treated with IL-17 inhibitors for 24 weeks significantly increased compared to non-responders, whereas that of Lachnospiraceae NK4A136 and Roseburia significantly decreased (Figure 4).
## 2.5. Functional Prediction of Gut Microbiome after Treatment with IL-23 and IL-17 Inhibitors
Considering the pathways related to metabolism, we found a number of pathway modules associated with lipid metabolism, inositol phosphate metabolism, and glutathione metabolism enriched in patients treated with the IL-23 inhibitor at week 24. In contrast, bacterial genes assigned to energy metabolism, arginine biosynthesis, cysteine and methionine metabolism, fructose and mannose metabolism, and carbapenem biosynthesis were less abundant. In patients treated with IL-17 inhibitors, the abundance of pathway modules associated with indole alkaloid biosynthesis increased, while that with lysine biosynthesis decreased (Table 2).
In addition, we investigated alterations in microbial functions at week 24 from the baseline between responders and non-responders. Among patients treated with the IL-23 inhibitor, the pathway of taurine and hypotaurine metabolism was enriched in the responders compared to non-responders (Table 3). Among patients treated with IL-17 inhibitors, 13 metabolism pathways were significantly enriched and 3 decreased in responders after 24 weeks of treatment compared with non-responders (Table 3). The pathways involved in amino acids metabolism, biosynthesis of antibiotics, and carbohydrate metabolism were differentially enriched from the baseline after the treatment with IL-17 inhibitors.
## 3. Discussion
In the present study, we analyzed the gut microbial diversities and taxonomies in patients with psoriasis at weeks 4, 12, and 24 after the treatment with IL-23 or IL-17 inhibitors. This is the first study to demonstrate a significant increase in β diversity of gut microbial communities and altered abundance of certain bacteria in patients receiving the IL-23 inhibitor for 24 weeks. In addition, we identified microbial taxa and functional pathways associated with the therapeutic options and treatment responses.
Changes in gut microbiota composition due to therapeutic agents and their influence on clinical response have been reported in patients with inflammatory bowel disease (IBD) [30]. A common type of gut microbiota change after biologic treatment is an increased abundance of short-chain fatty acid (SCFA)-producing bacteria, which are considered beneficial commensals [30]. An improvement in intestinal dysbiosis, with an increased abundance of SCFA-producing bacteria such as Anaerostipes, Blautia, and Roseburia, was reported in IBD patients after receiving infliximab [31]. Moreover, similar findings were demonstrated in IBD patients receiving ustekinumab [32]. In this study, we found that the relative abundance of Anaerostipes and Roseburia increased in patients after IL-23 inhibitor treatment, which may increase the production of SCFAs and consequently restore immunomodulatory function and the intestinal epithelial barrier [33,34]. Conversely, the abundance of Blautia and Roseburia was reduced in those receiving IL-17 inhibitors. One previous study investigating the impact of secukinumab on gut microbial composition [21] showed a reduction in the abundance of SCFA-producing Firmicutes, consistent with our findings.
The genus Bacteroides constitutes $30\%$ of the total colonic bacteria, and Bacteroides vulgatus is one of the most commonly encountered Bacteroides species in the human gut [35]. The role of B. vulgatus in modulating the immune system has been investigated in animal experiments. Supplementation with B. vulgatus attenuated symptoms of colitis in mice and decreased the expression of TNF-α, IL-1β, and IL-6 in the colon [36]. Moreover, suppression of the systemic and intestinal immune response was observed in mice gavaged with B. vulgatus [37,38]. The present study demonstrated that the relative abundance of B. vulgatus increased after IL-23 inhibitor treatment, which might further imply a beneficial gut-immunomodulatory effect of the IL-23 inhibitor in psoriasis.
The gut is considered to be a major immune organ, with gut-associated lymphoid tissue (GALT) being its most complex immune compartment [39]. It is well known that changes in the gut microbial composition may promote both health and disease [40]. Strong evidence has indicated that intestinal dysbiosis is clinically relevant to psoriasis [41]. The importance of the gut–skin axis in the pathogenesis of psoriasis has recently been documented in humans as well as in animal models [42]. In imiquimod-induced psoriasis-like mice, gut microbiota promoted intestinal and cutaneous inflammation by enhancing the IL-23/IL-17 axis [42,43]. In addition, a gut microbial genus, Romboutsia, increased in mice with imiquimod-induced psoriasis [43], suggesting that IL-23/IL-17-axis-related psoriasis may be associated with levels of gut Romboutsia. Intriguingly, our study revealed that the abundance of Romboutsia significantly decreased at week 24 in the responders to the IL-23 inhibitor when compared with non-responders. However, there was no significant difference in the gut Romboutsia level between responders and non-responders treated with IL-17 inhibitors. Based on these findings, we speculate that blocking IL-23 may ameliorate Romboutsia-mediated psoriasis by improving IL-23/IL-17-axis-related skin inflammation.
At the genus level, an enriched Lachnospiraceae NK4A136 group was detected in patients with ankylosing spondylitis [44] and IBD [45]. Recently, a study on the gut microbiome demonstrated an increase in the abundance of gut Lactobacillaceae in psoriatic patients [13]. Our results further revealed that the abundance of Lachnospiraceae NK4A136 at week 24 significantly decreased in responders to IL-17 inhibitors compared to non-responders. It has been shown that the Lachnospiraceae NK4A136 group is correlated with elevated levels of intestinal IL-17 and IL-6 in mice with diabetes mellitus, resulting in intestinal inflammation [46]. Thus, we hypothesize that responders to IL-17 inhibitors might benefit from a reduction in the gut Lachnospiraceae NK4A136 group, which likely contributes to reduced skin inflammation. Further investigation should be conducted to address the causal relationship of these findings.
In our study, sixteen KEGG pathways were found to be significantly enriched in responders to IL-17 inhibitors, such as the biosynthesis of amino acids, energy metabolism, and biosynthesis of antibiotics including vancomycin, validamycin, and novobiocin. Previously, dramatic changes in glucose metabolism, amino acid metabolism, and energy metabolism have been shown in psoriasis [47,48]. Metabolic regulation of cell proliferation and apoptosis was thought to be critical for dysregulated keratinocyte hyperproliferation in psoriasis [49,50]. Altogether, these findings suggest that altered gut-microbiota-mediated biosynthesis of amino acids and energy metabolism may also contribute to specific phenotypes in patients with psoriasis, such as uncontrolled keratinocyte hyperproliferation. It was reported that treatment with broad-spectrum antibiotics in mice with imiquimod-induced psoriasis reduced proinflammatory IL-17-producing T cells and skin thickness [16,42]. Moreover, Actinobacteria, isolated from the gut of freshwater fish, exhibited antimicrobial activities by producing antibiotic compounds [51]. Our data showed that gut microbiome-encoded metabolic KEGG pathways enriched in the responders to IL-17 inhibitors were concentrated in the biosynthesis of antibiotics. According to these findings, we suggest that IL-17 inhibitors may partially improve psoriasis-related skin inflammation by enhancing gut-microbiota-mediated biosynthesis of antibiotics.
In addition, a reduction in the abundance of the taurine and hypotaurine metabolism pathway in patients with severe psoriasis has been observed in one recent study [52]. Our results demonstrated that the abundance of the taurine and hypotaurine metabolic pathway was significantly enhanced in the responders to the IL-23 inhibitor, as compared with that in non-responders. Taurine, an abundant amino acid in leukocytes, is found in high concentrations in inflammatory lesions and tissues exposed to oxidative stress [53]. Collectively, these findings and our data imply that a shift in gut bacterial composition due to the IL-23 inhibitor could lead to significant changes in taurine metabolism, which may correlate with an improvement in the inflammatory status in patients with psoriasis.
Our results should be considered in the context of several limitations. First, sample sizes were limited, and larger cohorts should be assessed in future studies. Second, due to the relatively limited resolution of the 16S rRNA sequencing technique [54], shotgun metagenomic sequencing methods are needed to identify specific bacterial strains in psoriasis. Third, based on the gut-microbiota-mediated metabolic pathways related to the response to the IL-23 inhibitor or IL-17 inhibitors identified in psoriatic patients, it is necessary to explore their key regulatory targets. Finally, we did not investigate inflammatory markers in peripheral blood, gut tissue, or stool, so we could not relate inflammatory changes to the microbial composition.
In summary, treatments with IL-23 and IL-17 inhibitors were associated with distinct shifts in gut microbial composition in patients with psoriasis. Significant differences in the relative abundance of bacterial taxa between the responders and non-responders suggested that IL-23 and IL-17 inhibitors may functionally interact with the gut microbiota to reduce cutaneous inflammation. Moreover, we demonstrated an association between the treatment response and gut microbial function, which might serve as a potential biomarker of the treatment response.
## 4. Materials and Methods
## 4.1. Study Design and Patients
This prospective study enrolled forty-eight patients with psoriasis, including 30 cases treated with an IL-23 inhibitor (guselkumab) and 18 cases treated with IL-17 inhibitors (ixekizumab or secukinumab), at Chang Gung Memorial Hospital (Taoyuan, Taiwan) from September 2020 to March 2022. None of the included cases had taken systemic antibiotics, systemic immunosuppressant agents, oral corticosteroids, or probiotics within one month before each sample collection; the only systemic treatment was guselkumab, ixekizumab, or secukinumab. The anti-IL-23 medication group received guselkumab 100 mg at weeks 0 and 4, and every 8 weeks thereafter. The anti-IL-17 medication group received either ixekizumab 160 mg at week 0, 80 mg at weeks 2, 4, 6, 8, 10, and 12, and 80 mg every 4 weeks thereafter, or secukinumab 300 mg at weeks 0, 1, 2, 3, and 4, and every 4 weeks thereafter. The demographics and clinical data of the patients, including age, gender, weight, and psoriatic arthritis (PsA), were collected at baseline. The Psoriasis Area and Severity Index (PASI) score and serum C-reactive protein (CRP) level were collected at weeks 0, 4, 12, and 24. Responders were defined as those achieving a PASI improvement of ≥$90\%$ after 24 weeks of treatment, and non-responders as those with a PASI improvement of <$90\%$. Information about food intake during the study was collected at weeks 0, 12, and 24 through a food-frequency questionnaire (FFQ) [55].
## 4.2. DNA Isolation and 16S rRNA Gene Sequencing
Stool specimens were collected using the Longsee Fecalpro Kit (Longsee Medical Technology Co., Guangzhou, China) at baseline and at 4, 12, and 24 weeks after treatment. As described previously [56], DNA was isolated using the QIAamp PowerFecal Pro DNA Kit (Qiagen, Germantown, MD, USA) following the manufacturer’s instructions. Approximately 0.25 g of sample in the Bead Tube was mixed with 750 μL of PowerBead Solution and 60 μL of Solution C1 and heated at 65 °C for 10 min. The mixture was vortexed using a PowerLyzer Homogenizer at 1000 rpm for 10 min. After cell lysis, removal of contaminants, washing, and elution with DNA-free, PCR-grade water, the DNA was obtained. The concentration and quality of the extracted DNA were measured using a Qubit 4 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA).
The variable regions 3 and 4 (V3–V4) of the 16S rRNA gene were amplified by polymerase chain reaction (PCR) using the primer set Illumina V3 forward 5′-CCTACGGGNGGCWGCAG-3′ and V4 reverse 5′-GACTACHVGGGTATCTAATCC-3′ [57]. Illumina sequencing adapters were ligated to the purified amplicons in a second-stage PCR using the TruSeq DNA LT Sample Preparation Kit (Illumina, San Diego, CA, USA) to construct the library. Purified libraries were quantified, normalized, pooled, and applied for cluster generation and sequencing on a MiSeq instrument (Illumina).
## 4.3. Sequencing Data Processing and Species Annotation
Paired-end reads were processed using DADA2 [58] to filter out noisy sequences, correct errors in marginal sequences, remove chimeric sequences, and eliminate singletons to infer amplicon sequence variants (ASVs). Bacterial taxonomy was assigned by applying a pre-fitted QIIME2 classifier built with the scikit-learn package [59] based on information collected from the SILVA database [60]. Multiple sequence alignment was performed with PyNAST v.1.2 [61] to assess the phylogenetic relationships of the ASVs, and a phylogenetic tree was constructed with FastTree 2.1.0 [62].
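For concreteness, the sketch below trains a toy k-mer naive Bayes classifier in the spirit of the scikit-learn-based QIIME2 classifier described above; the reference sequences, genus labels, and 8-mer size are hypothetical placeholders, not the SILVA-trained model used in this study.

```python
# Toy sketch of k-mer-based naive Bayes taxonomy assignment (not the actual
# QIIME2/SILVA classifier); sequences and labels are made-up placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

ref_seqs = [
    "ACGGAGGATGCGAGCGTTATCCGGATTTATTGGGTTTAAAGGG",  # toy reference read 1
    "ACGGAGGGTGCAAGCGTTAATCGGAATTACTGGGCGTAAAGCG",  # toy reference read 2
]
ref_taxa = ["Bacteroides", "Escherichia"]            # toy genus labels

# Represent each sequence by its overlapping 8-mer counts, then fit the model.
classifier = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(8, 8), lowercase=False),
    MultinomialNB(),
)
classifier.fit(ref_seqs, ref_taxa)

query = ["ACGGAGGATGCGAGCGTTATCCGGATTTATTGG"]         # toy query ASV
print(classifier.predict(query))                      # -> ['Bacteroides']
```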
## 4.4. Microbial Gene Function Prediction
The functional composition of the metagenomes was predicted from the 16S rRNA data using the Tax4Fun2 software [63]. To predict the functional profile of the microbial community, taxonomic abundances derived from the SILVA-based 16S rRNA assignments and normalized by the 16S rRNA copy numbers acquired from NCBI annotations were combined with the precomputed functional profiles of KEGG pathways [63]. The KEGG analysis focused only on “Metabolism” pathways.
## 4.5. Statistical Analysis
Demographic and clinical characteristics were presented as n (%) for categorical variables and mean ± standard deviation (SD) or median with range for continuous variables. For estimating alpha diversity, species richness was evaluated by the inverse Simpson’s index. Beta diversity was analyzed by the Bray–Curtis or unweighted-UniFrac distance matrix. To investigate the association between treatment effect and bacteria in the fecal specimens, we further identified differentially abundant bacterial taxa among groups. Statistically significant biomarkers were identified by the non-parametric Kruskal–Wallis test, Wilcoxon rank-sum test, and linear discriminant analysis (LDA). The change in relative abundance after treatment from baseline was compared between responders and non-responders by fitting a linear mixed model, measured on a continuous scale, to identify longitudinal biomarkers. All statistical tests were two-tailed, and a p-value less than 0.05 was considered statistically significant.
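As a concrete illustration of the two diversity measures named above, the following sketch computes the inverse Simpson’s index and the Bray–Curtis distance on a toy ASV count table; the counts are placeholders, not study data.

```python
# Minimal alpha/beta diversity sketch on placeholder ASV counts.
import numpy as np
from scipy.spatial.distance import braycurtis

counts = np.array([
    [120, 30, 50, 0],   # sample 1: ASV counts
    [10, 80, 60, 40],   # sample 2: ASV counts
], dtype=float)

def inverse_simpson(x):
    """Inverse Simpson's index: 1 / sum(p_i^2), with p_i = relative abundance."""
    p = x / x.sum()
    return 1.0 / np.sum(p ** 2)

alpha = [inverse_simpson(s) for s in counts]   # alpha diversity per sample
beta = braycurtis(counts[0], counts[1])        # beta diversity between samples
print("Inverse Simpson:", alpha)
print("Bray-Curtis distance:", beta)
```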
# The Landscape of Lipid Metabolism in Lung Cancer: The Role of Structural Profiling
## Abstract
The aim of this study was to explore the relationship between lipids with different structural features and lung cancer (LC) risk and to identify prospective biomarkers of LC. Univariate and multivariate analysis methods were used to screen for differential lipids, and two machine learning methods were used to define combined lipid biomarkers. A lipid score (LS) based on the lipid biomarkers was calculated, and a mediation analysis was performed. A total of 605 lipid species spanning 20 individual lipid classes were identified in the plasma lipidome. Dihydroceramide (DCER), phosphatidylethanolamine (PE), and phosphatidylinositol (PI) species with higher carbon numbers presented a significant negative correlation with LC. Point estimates revealed an inverse association between the n-3 PUFA score and LC. Ten lipids were identified as markers with an area under the curve (AUC) value of 0.947 ($95\%$ CI: 0.879–0.989). In this study, we summarized the potential relationship between lipid molecules with different structural features and LC risk, identified a panel of LC biomarkers, and demonstrated that the n-3 PUFA content of the acyl chains of lipids was a protective factor for LC.
## 1. Introduction
Lung cancer (LC) is the second most common cancer and the leading cause of cancer-related death worldwide [1]. Despite several initiatives to control tobacco, there has been no significant downward trend in the incidence of LC. There are no obvious specific symptoms in the early stages of LC, and most people are diagnosed in the mid to late stages. This creates a substantial burden for healthcare systems and severely compromises the quality of life of people with LC.
LC has various pathogenic factors and complex pathological mechanisms, but metabolic reprogramming is one of the most important hallmarks of tumor cells. It is commonly found in glucose metabolism, amino acid metabolism, and lipid metabolism, although changes in lipid metabolism have received less attention than the other topics. Lipids are essential components of biological membranes and structural units of cells. A comprehensive classification system organizes lipids into eight well-defined categories: fatty acyls (FA), glycerolipids (GL), glycerophospholipids (GP), sphingolipids (SP), sterol lipids (ST), prenol lipids (PR), saccharolipids (SL), and polyketides (PK) [2]. Lipid metabolism is critical in tumorigenesis [3,4]. Recently, research interest has shifted toward using omics to explore the pathophysiological processes involved in the development of LC, where lipidomics captures changes in endogenous and exogenous molecules that confer further insights [5]. The dysregulation of lipid metabolism is one of the most prominent metabolic alterations in LC and can lead to abnormal gene expression and disordered signaling pathways [6]. Research on major lung cell types isolated from human donors illustrated the significant role of lipids in lung function and lung development, including phosphatidylglycerol (PG), diacylglycerol (DAG), and triacylglycerol (TAG) [7]. In addition, lipidomics research on LC [3,8,9] has become more common, but it merely provides clues regarding the importance of lipid classes, including phosphatidylcholine (PC), PE, and lysophosphatidylcholine (LPC), in the pathology of non-small cell lung cancer (NSCLC). FAs are at the root of these complex lipids, and their functions and features are mainly determined by their structure, which depends on the number of carbons in the chain (short, medium, long, or extra-long fatty acids) and the number of double bonds (saturated, monounsaturated, and polyunsaturated fatty acids (PUFA)) [10]. Changes in saturated and unsaturated fatty acid levels can disrupt homeostasis in vivo, enhance cellular stress, alter cell membrane dynamics, and affect the uptake and efficacy of chemotherapeutic drugs [11,12]. Due to the structural and biosynthetic complexity of lipids, the results of previous studies are varied [3,13], and the contribution of lipid structural features to the pathogenesis of LC remains unexplored.
The identification of biomarkers or novel metabolic dysregulation pathways has long been a popular field. Thousands of candidate cancer biomarkers have been identified, but only a few are currently used in clinical practice. Lipid metabolism disorders have been found to have great potential for discovering biomarkers and understanding the pathogenesis of LC.
Chronic inflammation is a precondition for the progression of cancers [14]. Although numerous studies [15,16] have reported that inflammation-driven markers, including the neutrophil-lymphocyte ratio (NLR) and the platelet lymphocyte ratio (PLR), play a key role in tumorigenesis and progression, it is unknown whether inflammatory mediators play a role in the pathogenesis of LC caused by lipid metabolism disorders.
Herein, we aim to descriptively summarize the potential relationship between lipid molecules with different structural features and LC risk, identify prospective biomarkers of LC that can be applied in clinical diagnosis and treatment, and further discuss how inflammation mediates the relationship between lipids and LC risk.
## 2. Materials and Methods
## 2.1. Study Population
As part of an ongoing hospital-based case-control study, patients were recruited from the Second Affiliated Hospital of Fujian Medical University. The inclusion criteria for the cases were as follows: [1] patients with primary LC diagnosed by fiberoptic bronchoscopy or histopathologic evaluation; [2] patients who had not received chemotherapy; and [3] patients without other lung diseases or systemic diseases, such as heart, liver, kidney, or brain disease. The exclusion criteria included patients with a pathologic diagnosis of lung inflammation, benign lesion, or secondary LC. During the same study period, 62 age-matched (±2 years) and sex-matched cancer-free healthy controls (median age: 54 years) were randomly selected from a health examination cohort. All subjects were Han Chinese people who had lived in Fujian for at least 10 years, did not suffer from coronary atherosclerotic heart disease, cerebrovascular disease, thyroid insufficiency, diabetes, or hyperlipidemia, and were able to answer the study questions. All participants provided written informed consent. The present study was approved by the Second Affiliated Hospital of Fujian Medical University’s Institutional Review Board with the certificate number IRB No. 2021–452.
## 2.2. Chemicals and Reagents
The internal standards were purchased from AB SCIEX (refer to Supplementary Materials File S1 for details). HPLC grade methanol, isopropanol (IPA), acetonitrile (ACN), and water were purchased from Merck (Darmstadt, Germany).
## 2.3. Sample Collection and Preparation
Approximately 10 mL of peripheral venous blood was collected from each study subject under fasting conditions and fractionated by a trained researcher according to standard protocols, and the EDTA plasma was stored at −80 °C in deep freezers. Plasma samples were processed according to the following sequential steps: [1] 225 μL of methanol was added to each 20 μL plasma sample, which was then vortexed for 10 s; [2] 13 μL of internal standards and 750 μL of MTBE were added, vortexed for 10 s, and allowed to stand for 30 min; [3] an additional 188 μL of water was added, vortexed for 20 s, and allowed to stand for 10 min before centrifugation at 15,000 rpm at 4 °C for 15 min; [4] 700 μL of supernatant was collected from each sample separately and dried under a stream of nitrogen; [5] 100 μL of reconstitution reagent ($65\%$ ACN, $30\%$ IPA, and $5\%$ H2O, v:v:v) was added to each dried sample, which was then vortexed for 10 s and centrifuged at 14,000 rpm at 4 °C for 10 min; and [6] 100 μL of supernatant from each tube was then transferred into vials for HPLC-MS/MS analysis. Before the sequence analysis, four consecutive quality control (QC) samples were injected to assess the reproducibility of the system. QC samples were inserted every 10 samples to evaluate the stability of the system during the sequence analysis.
## 2.4. HPLC-MS/MS Analysis
Plasma lipidomes were analyzed using HPLC coupled with a 4500 QTRAP mass spectrometer (AB SCIEX Pte. Ltd., Framingham, MA, USA). Chromatographic separations were performed on an ACQUITY UPLC BEH HILIC column (1.7 μm, 100 mm × 2.1 mm, 186003461) (Waters); the chromatographic parameters are listed in Table S1.
Quantitation was performed on a 4500 QTRAP tandem mass spectrometer coupled with an electrospray ionization (ESI) source (the conditions of the ESI are shown in Table S2). Multiple reaction monitoring (MRM) was used to quantify the compounds in the positive and negative ion modes (Table S3 sheet 1–2 present detailed information about lipids in positive and negative modes). The ion adduct form of each lipid is shown in Table S3 sheet 3.
Throughout the analysis process, the samples were placed in an autosampler and analyzed continuously in a randomized order to avoid fluctuation in the instrument’s detection signal that would affect the experimental results.
## 2.5. Data Processing and Statistical Analysis
Peak identification, peak filtering, peak alignment, and lipid identification were performed using SCIEX OS software to obtain a two-dimensional data matrix, including the mass-to-charge ratio, retention time, peak area, and lipid class information. The relative abundance of each lipid was measured while considering the total area of all the transitions that were analyzed. The peak area data for the individual lipids were then transformed and normalized.
Since the relationship between lipids and disease risk may vary depending on the length and unsaturation of the acyl chain [17], the twenty lipid subclasses were grouped and further analyzed by their total carbon number and total double-bond number. The total concentrations of each group in the individual lipid subclasses were log-transformed and standardized to unit variance. Odds ratios (ORs) of LC risk per log-transformed SD increase in each lipid species, based on the number of carbon atoms and double bonds, were calculated using conditional logistic regression models and visualized with bubble plots. To further reveal and clarify the potential biological mechanisms of lipid species based on FA, the n-3 and n-6 PUFA scores among the subclasses of the lipid profiles were calculated, and the relationships were explored. Univariate and multivariate statistical analyses were used to screen the differential lipids. Univariate statistical analyses included the t-test, fold change (FC) analysis, and volcano plots based on the first two analyses. Multivariate statistical analysis included unsupervised principal component analysis (PCA), supervised partial least squares-discriminant analysis (PLS-DA), orthogonal partial least squares-discriminant analysis (OPLS-DA), and sparse partial least squares-discriminant analysis (sPLS-DA). Then, two machine learning approaches (the random forest algorithm and the support vector machine algorithm) were used to define a combinational lipid biomarker in the plasma samples to distinguish patients with LC from healthy controls. Lipid scores (LS) were calculated by multiplying the OR of each of the ten biomarkers by its corresponding concentration value: LS = 0.23 × DAG (32:0) + 0.28 × DAG (34:0) + 0.27 × FFA (16:2) + 0.20 × FFA (24:1) + 0.17 × PE (O-38:5) + 0.32 × PC (40:4) + 0.16 × PS (38:6) + 0.30 × TAG (55:2/FA 18:2) + 0.26 × TAG (54:7/FA 18:1) + 0.25 × DAG (40:8). A mediation analysis [18] was performed to test whether the observed associations between LS and LC could be explained by blood mediators, using the medflex package in R. All statistical analyses were performed using R 4.1.1 software. A two-sided p-value < 0.05 was considered statistically significant.
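For illustration, the sketch below (in Python, whereas the study’s analyses were run in R) reproduces the normalization and LS computation described above; the peak areas and standardized concentrations are hypothetical placeholders, not study data.

```python
# Sketch of log-transform/standardization and the lipid score (LS) above.
import numpy as np

def log_standardize(x):
    """Log-transform, then scale to zero mean and unit variance (per Methods)."""
    z = np.log(np.asarray(x, dtype=float))
    return (z - z.mean()) / z.std(ddof=1)

print(log_standardize([1200.0, 950.0, 1800.0, 700.0]))  # placeholder peak areas

# OR weights of the ten biomarkers, taken from the LS formula in the text.
or_weight = {
    "DAG (32:0)": 0.23, "DAG (34:0)": 0.28, "FFA (16:2)": 0.27,
    "FFA (24:1)": 0.20, "PE (O-38:5)": 0.17, "PC (40:4)": 0.32,
    "PS (38:6)": 0.16, "TAG (55:2/FA 18:2)": 0.30,
    "TAG (54:7/FA 18:1)": 0.26, "DAG (40:8)": 0.25,
}
# Hypothetical standardized concentrations for one subject:
conc = dict(zip(or_weight, np.linspace(-1.0, 1.0, 10)))
ls = sum(or_weight[k] * conc[k] for k in or_weight)
print(f"LS = {ls:.3f}")
```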
## 3. Results
## 3.1. Characteristics of the Participants
After matching the sex and age of the cases and controls, a total of 124 participants were enrolled (62 cases and 62 controls). The characteristics of the participants with LC and the controls are shown in Table 1. As expected from the matching, the two groups showed similar characteristics. To broadly explore the distribution of blood indexes between those with LC and the controls, 10 blood indexes were measured, including white blood cell count (WBC), neutrophil count (NEUT), lymphocyte count (LYMPH), monocyte count (MONO), platelet count (PLT), eosinophil count (EOS), NLR, PLR, neutrophil-monocyte ratio (NMR), and lymphocyte-monocyte ratio (LMR). Compared with the control group, the LC group had higher WBC, MONO, PLT, NLR, PLR, and NMR values (all $p \leq 0.05$) and lower LYMPH and LMR values ($p \leq 0.05$).
## 3.2. Lipid Profiling Grouped by Lipid Structure and Risk of LC
Representative chromatograms of the QC, case, and control lipids in positive and negative ion modes are shown in Figures S1–S3. The results of the unsupervised PCA are shown in Figure S4. Tight clustering of the QC samples indicated high experimental quality in the unsupervised PCA. Apparent differences in grouping between the LC and control groups are shown in the PCA score scatter plots for the negative (Figure S4A) and positive (Figure S4B) ionization modes. A total of 605 lipid species spanning 20 individual lipid classes were identified in the plasma lipidome from the 124 subjects, as shown in Figure S4C. A volcano plot representing the levels of the lipids that were upregulated or downregulated in patients with LC compared with the control group is shown in Figure S4D.
An overview of the relationship between the 20 lipid subclasses and LC risk is presented in Figure S5. Cholesteryl esters (CE), ceramides (CER), DCER, free fatty acids (FFA), hexosylceramides (HCER), PC, LPC, and lysophosphatidylethanolamine (LPE) were positively correlated with LC, while phosphatidic acids (PA), PE (O), PE (P), and phosphatidylserines (PS) were negatively correlated. DAG, LCER, LPG, PE, PG, PI, SM, and TAG were not significantly associated with LC. To clarify the potential biological mechanisms linking each lipid with LC, the effect of the lipids based on their chemical structure was estimated using a multivariate conditional logistic regression model and is visualized in Figure 1. The ORs for the individual lipids and their FDR values are plotted in a two-dimensional graph defined by the number of carbon atoms (x-axis) and the number of double bonds (y-axis) in the acyl chain (Supplemental Table S3, sheet 4, includes the exact ORs and FDR-corrected p-values). CE, CER, DCER, FFA, HCER, LPC, LPE, and PC were positively related to LC risk, whereas PA, PE, PI, and PS were related to lower odds of LC. DCER, PE, and PI species with higher carbon numbers presented a significant negative correlation with LC. However, we did not find any clear correlation between the parity of the number of double bonds or carbon atoms and LC risk. Compared with unsaturated FFAs, saturated FFAs were positively correlated with LC risk.
Moreover, the n-3 and n-6 PUFA scores of the acyl chains were calculated. Point estimates across all lipid species revealed inverse associations of LC risk with the n-3 PUFA score and with the n-6/n-3 ratio, while there was a positive association between the n-6 PUFA score and LC risk. These results are shown in Table 2. Significant negative associations between n-3 PUFA scores and LC risk were observed in the DAG, PA, PE, PS, and TAG classes, whereas significant positive associations between n-6 PUFA scores and LC risk were observed in the FFA, LPC, LPE, and TAG classes (all $p \leq 0.05$).
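To make the per-SD OR estimation from the Methods concrete, the sketch below fits a conditional logistic regression on simulated matched pairs; it assumes statsmodels’ ConditionalLogit (Python, for illustration only; the study’s models were fitted in R).

```python
# Hedged sketch: OR of LC per SD increase in a standardized lipid, estimated
# within matched case-control pairs on simulated (not study) data.
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_pairs = 62
groups = np.repeat(np.arange(n_pairs), 2)        # matched-pair identifiers
y = np.tile([1, 0], n_pairs)                     # case vs. matched control
lipid = rng.normal(size=2 * n_pairs) - 0.5 * y   # simulate lower levels in cases

result = ConditionalLogit(y, lipid.reshape(-1, 1), groups=groups).fit()
print("OR per SD increase:", float(np.exp(result.params[0])))  # < 1 here
```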
## 3.3. Screening for Differential Lipids and the Risk of LC
PLS-DA, OPLS-DA, and sPLS-DA were used to identify differences in lipid profiles between the LC and control groups in positive and negative modes (Figure 2A,C and Figure S6A–H). Compared with OPLS-DA and sPLS-DA, PLS-DA achieved remarkable separation of the LC and control groups, with R2 = 0.816 and Q2 = 0.647 in the positive mode and R2 = 0.908 and Q2 = 0.738 in the negative mode (Figure 2B,D). After combining the FC values, t-tests, and VIP values from PLS-DA, 36 differential lipid species were extracted with an FC > 2.0, an FDR-corrected p-value < 0.05, and a VIP value > 1.5. The associations between the 36 differential lipids and LC risk are shown in Figure S7. When applying the 36 differential lipids in KEGG, the pathways of GP metabolism, glycosylphosphatidylinositol-anchor biosynthesis, and GL metabolism were detected (Figure S6I). SVM and random forest algorithms were used to further obtain significant lipid biomarkers between the LC and control groups, and the random forest algorithm yielded the better predictive accuracy for the biomarker model (Figure 3A,B and Figure S8). A panel of 10 lipid biomarkers was identified by the random forest algorithm and included DAG (34:0), DAG (32:0), FFA (16:2), FFA (24:1), PE (O-38:5), PC (40:4), PS (38:6), TAG (55:2/FA 18:2), TAG (54:7/FA 18:1), and DAG (40:8). Lower odds were observed between DAG (34:0), DAG (32:0), FFA (16:2), FFA (24:1), PE (O-38:5), PC (40:4), PS (38:6), TAG (55:2/FA 18:2), TAG (54:7/FA 18:1), DAG (40:8) and LC risk in the multivariate conditional logistic regression (OR = 0.23, $95\%$ CI: 0.10–0.52; OR = 0.28, $95\%$ CI: 0.14–0.55; OR = 0.20, $95\%$ CI: 0.08–0.49; OR = 0.17, $95\%$ CI: 0.07–0.40; OR = 0.27, $95\%$ CI: 0.14–0.52; OR = 0.32, $95\%$ CI: 0.18–0.55; OR = 0.16, $95\%$ CI: 0.06–0.40; OR = 0.30, $95\%$ CI: 0.16–0.55; OR = 0.26, $95\%$ CI: 0.13–0.51; and OR = 0.25, $95\%$ CI: 0.12–0.49, respectively) (Figure 3D). Notably, the lipid biomarkers showed high predictive performance, with AUC = 0.909 ($95\%$ CI: 0.813–0.984) for 5 lipid species and AUC = 0.96 ($95\%$ CI: 0.909–0.993) for 10 lipid species (Figure 3C). Moreover, an LS based on the 10 lipid biomarkers was calculated, and the association between LS and LC risk was investigated (OR = 0.11, $95\%$ CI: 0.06–0.67). As shown in Figure S9, a higher LS was observed in the control group compared with the LC group, with high predictive efficiency (0.96, $95\%$ CI: 0.92–1.00). We also divided the 62 patients into two groups: 28 in the early-stage group (stages 0 and I) and 34 in the intermediate- and advanced-stage group (stages II–IV). Similarly, we screened for nine differential lipids between the early-stage and intermediate-to-advanced-stage groups and then showed the associations between lipid signatures and lung cancer via forest plots (Figure S10).
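The machine-learning step can be sketched as follows (Python with simulated data, for illustration; the study’s analyses were run in R): rank lipids by random forest importance, keep the top ten, and evaluate the panel by cross-validated AUC. In practice, feature selection should be nested inside the cross-validation to avoid optimistic AUC estimates.

```python
# Hedged sketch of random forest biomarker ranking and AUC evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(124, 605))        # 124 subjects x 605 lipid species
y = np.array([1] * 62 + [0] * 62)      # 62 LC cases, 62 controls
X[y == 1, :10] -= 0.8                  # make the first 10 lipids informative

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X, y)
panel = np.argsort(rf.feature_importances_)[::-1][:10]   # top-10 candidate panel

# Cross-validated probabilities for the panel; selection should ideally be
# nested within the CV loop rather than done on the full data as here.
proba = cross_val_predict(rf, X[:, panel], y, cv=5, method="predict_proba")
print("Panel AUC:", roc_auc_score(y, proba[:, 1]))
```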
## 3.4. Mediation Effects for Blood Indexes between Lipid Biomarkers and LC
Figure S9 shows LS and ROC curve analyses for the LC and control groups. Next, the association between blood indexes and lipid biomarkers was investigated (Figure S11). Generally, LMR and LYMPH showed a positive correlation with lipid biomarkers and LS, whereas PLR, NLR, and PLT were negatively associated with lipid biomarkers. In addition, MONO, NEUT, and WBC were negatively correlated with some markers, including LS, TAG (55:2/FA 18:2), TAG (54:7/FA 18:1), PE (O-38:5), PC (40:4), PS (38:6), and DAG (40:8).
A mediation analysis was performed to test whether the observed associations between LS and LC could be explained by a blood mediator. As shown in Table 3, LS had a partial indirect effect on LC through LMR, LYMPH, MONO, NLR, and PLR, with matched $95\%$ CIs that excluded zero. The proportions mediated by LMR, LYMPH, MONO, NLR, and PLR were $2.87\%$, $1.89\%$, $2.03\%$, $5.04\%$, and $2.95\%$, respectively.
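For intuition, the sketch below illustrates a simplified difference-in-coefficients mediation estimate on simulated data (Python; the study used natural effect models via the medflex R package, which this does not reproduce).

```python
# Simplified mediation sketch: proportion of the LS-LC association (log-odds
# scale) explained by one blood index; all data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 124
ls = rng.normal(size=n)                      # lipid score (exposure)
nlr = -0.4 * ls + rng.normal(size=n)         # mediator, e.g., NLR
logit_p = -1.2 * ls + 0.5 * nlr              # true data-generating model
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))   # LC status

total = sm.Logit(y, sm.add_constant(ls)).fit(disp=0).params[1]
direct = sm.Logit(y, sm.add_constant(np.column_stack([ls, nlr]))).fit(disp=0).params[1]
print("Proportion mediated ~", (total - direct) / total)
```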
## 4. Discussion
In the current study, we used targeted HPLC-MS/MS lipidomics and multiple statistical analyses to identify and quantify potentially differential lipid molecules associated with LC and reveal the specific relationships between different lipid structures and LC. Collectively, these findings capture the characteristic metabolic fingerprints of LC patients and elucidate the role of inflammatory mediators between lipids and LC, which adds value to recent studies on lipid metabolic reprogramming. To our knowledge, this is the first relatively comprehensive lipidomics study to explore lipid structural profiles and LC risk. In addition, this study innovatively used inflammatory indicators in blood as mediators to explore the relationships between different lipid molecules and LC.
One ST (CE), one FA (FFA), three SPs (CER, DCER, and HCER), and three GPs (LPC, LPE, and PC) were directly associated with LC risk, while the other four GPs (PA, PE, PI, and PS) were inversely associated with LC risk. The dysregulation of CE metabolism has been demonstrated in many tumor biomarker studies [19,20,21], including studies on LC [22]. A pilot study identified two lipid markers, including CE (C 18:2), that distinguish squamous cell LC from high-risk individuals with high sensitivity, specificity, and accuracy [23]. A study in Germany showed a significant accumulation of free cholesterol and cholesteryl esters within lung tumor tissue, and based on reports of elevated cholesterol levels in cancer cells, strategies to reduce cholesterol synthesis have been suggested as an anti-tumor approach [24]. SPs are bioactive lipids that are involved in the modulation of cell survival, proliferation, and inflammatory responses, and the SphK/S1P/S1PR (S1P) pathway drives many anti-apoptotic and proliferative processes [25]. The disruption of SP metabolism has been associated with the pathogenesis of LC [6,26]. From a translational perspective, lipid-based multi-biomarker panels may capture information on the common etiological mechanisms of LC.
We identified 10 lipids as a fingerprint of patients with LC, and the results showed that these 10 promising lipid biomarkers achieved an AUC of 0.960. Changes in phospholipid metabolism significantly impact membrane structure, thereby affecting its function, altering key cellular signaling pathways (e.g., cell proliferation and survival), and promoting tumorigenesis [27]. For instance, altered PC and PE membrane content, phospholipid metabolite levels, and FA profiles are commonly recognized as indicators of carcinoma development and progression [28]. PC and PE are phospholipids (crucial components of the cell membrane), and their quantities are altered in various malignancies [29]. PC can govern cancer cell death, and the blood levels of various PCs were significantly lower in LC patients [30]. PE participates in cell signaling and may control cellular growth and death programs. PE serum levels were higher in participants with malignant LC and decreased after surgical excision of the malignant nodules [31]. In a nested case-control study conducted in China, PC and PE-O showed significantly different levels between the LC and control groups and were negatively associated with LC risk [32]. DAG plays a lipid second messenger role used to transduce signals. Few studies have discovered an association between DAG and LC; however, the implication of DAG kinase in LC induction provides etiological clues [33,34]. FFA, which acts as a substrate for cell membrane structures, has been considered an independent risk factor for cancer [35]. The association between FA and LC has been extensively described [36,37]. A previous review discussed the role of fatty acids and their lipid mediators in the apoptosis of cancer cells [38]. Inconsistent study results may be partly due to differences in cancer type, study design, population, sample size, and confounding elimination.
Several findings in our study deserve particular attention. The association of lipids with LC risk can differ based on acyl chain length and degree of unsaturation; therefore, the lipids were further analyzed based on their carbon atom and double-bond numbers. Our results partially corroborated previous observations [39] that DCER, PE, and PI with longer acyl chains were associated with a lower risk of LC. Chen et al. demonstrated that the short-carbon-chain C2-ceramide can effectively sensitize PTX-induced senescence of H1299 cells via p21waf1/cip1- and p16ink4-independent pathways [39]. A previous study [31] suggested that long-chain PE is a potential diagnostic marker for LC. The overexpression of phosphatidylethanolamine-binding protein 4 (PEBP4) in LC regulates tumor progression, invasion, and metastatic potential, which may be partly due to an increase in PEs that act as agonists of PEBP and mediate signal transduction [40]. The relationship between FFA and LC varied with the degree of unsaturation. Our results, as well as previous studies [41], support the hypothesis that SFAs may play a significant role in the etiology and development of LC. Saturated lipids are less susceptible to lipid peroxidation, which may protect cancer cells from lipid peroxidation-mediated cell death; saturation also alters membrane dynamics and affects the uptake and efficacy of chemotherapeutics [12]. PUFAs are important for maintaining cellular function and internal environmental homeostasis, including signal transduction, cell growth, differentiation, and viability [37]. In particular, the levels of n-3 and n-6 PUFAs play an important role in LC risk and progression. The n-3 PUFAs include docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), and alpha-linolenic acid (ALA). A previous study reported that DHA-PC and EPA-PC significantly inhibited orthotopic tumor growth and lung metastasis via the activation of PPARγ and the downregulation of the NF-κB pathway [42]. Another study [36] used genome-wide association study (GWAS) data in a Mendelian randomization (MR) approach to demonstrate that n-3 PUFA is a causal protective factor for LC, which is similar to our results. In addition, a mouse experiment [43] showed that mice fed a diet rich in n-6 PUFA had significantly increased proliferation, angiogenic and pro-inflammatory markers, and decreased expression of pro-apoptotic proteins in their tumors. Nevertheless, the association between n-6 PUFA, n-6/n-3 ratios, and LC risk was not found in our study.
In terms of the causal association between lipids and LC, it is plausible that mediation by inflammatory mediators partially explains the relationship. Previous studies [44,45] have discovered an association between inflammatory mediators and lipid metabolites. Qian et al. [46] demonstrated that a key mechanism involved in inflammatory states is GP metabolism, which raises the possibility that phospholipids may act as inflammatory mediators. The polyunsaturated alkenyl-linked fatty acids found in PE(P) and PE(O), together known as plasmalogens, are essential for the storage of precursors of inflammatory mediators, the control of membrane fluidity, and anti-oxidation [47]. LPC plays an important role in mediating inflammation [48] and endothelial cell activation [49]. In addition, systemic inflammatory biomarkers relate to the occurrence and progression of cancer, including LC [50,51,52]. A prospective UK Biobank cohort recruited 440,000 participants, assessed the associations between systemic inflammation markers and risks for 17 cancer sites, and revealed that inflammation markers could serve as biomarkers of cancer [53]. In the current study, it was revealed that LMR, LYMPH, MONO, NLR, and PLR mediated $2.87\%$, $1.89\%$, $2.03\%$, $5.04\%$, and $2.95\%$, respectively, of the associations between LS and LC risk. Nevertheless, future research must still determine the precise processes underlying the detected connections.
The major strengths of our current study lie in its analytical approaches and well-characterized study design. Our targeted lipidomics approach was built upon HPLC-MS/MS, which allowed for the explicit identification and relative quantification of plasma lipids. Two univariate methods, three multivariate methods, and two machine learning methods were used to select differential lipid molecules, and their similarities and differences were compared. To our knowledge, the present study is the first to explore the relationship between lipids, blood inflammatory markers, and LC risk, and it provides a new perspective for future studies. However, the study has several limitations. Selection bias may be present in any hospital-based case-control study, but all subjects were recruited according to strict criteria, which may minimize selection bias. We also did not further explore the classification and staging of LC. Therefore, we will consider expanding the study sample, refining the subgroups, and staging the study population in future studies.
Finally, current immunotherapies have shown remarkable effects in controlling cancer, with the PD-1/PD-L1 axis being one of the most important and well-studied checkpoint pathways in cancer immunity [54]. Immune-related adverse events were found to be closely related to the mechanism by which PD-1/PD-L1 antibodies restart anti-cancer immune attacks in a lung cancer study by Sun et al. [55]. The specific role of lipids in the regulation of the PD-1/PD-L1 axis was revealed by Yang et al. [54]. Lipids are key metabolic switches in the immune response [54,56]. The link between lipids and cancer immunity provides an opportunity for future studies to use lipids as biomarkers to evaluate cancer immune responses.
## 5. Conclusions
In summary, the current study provides a comprehensive analysis of the plasma levels of 605 lipids. The findings of this study deepen our knowledge of the pathophysiological mechanisms of LC, highlight the importance of detailed studies on structural differences between various species of lipids in LC research, reveal relationships between different lipid subclass characteristics and LC risk, identify 10 lipid metabolites as potential novel biomarkers of LC risk, and explore associations between lipids and blood inflammatory indicators. In addition, clearly altered lipids were noted to be related to GP metabolism and alpha-linolenic acid metabolism. Our results suggest that lipid profiling may provide novel tools for the research of LC and facilitate the advancement of disease diagnosis and treatment. Blood lipids are promising LC biomarkers that may lead to new treatment strategies.
# Gut-Microbiota Dysbiosis in Stroke-Prone Spontaneously Hypertensive Rats with Diet-Induced Steatohepatitis
## Abstract
Metabolic-dysfunction-associated fatty-liver disease (MAFLD) is the principal worldwide cause of liver disease. Individuals with nonalcoholic steatohepatitis (NASH) have a higher prevalence of small-intestinal bacterial overgrowth (SIBO). We examined the gut-microbiota isolated from 12-week-old stroke-prone spontaneously hypertensive-5 rats (SHRSP5) fed a normal diet (ND) or a high-fat- and high-cholesterol-containing diet (HFCD) and clarified the differences between their gut-microbiota. We observed that the Firmicutes/Bacteroidetes (F/B) ratio in both the small intestines and the feces of the SHRSP5 rats fed HFCD increased compared to that of the SHRSP5 rats fed ND. Notably, the quantities of the 16S rRNA genes in the small intestines of the SHRSP5 rats fed HFCD were significantly lower than those of the SHRSP5 rats fed ND. As in SIBO syndrome, the SHRSP5 rats fed HFCD presented with diarrhea and body-weight loss, with abnormal types of bacteria in the small intestine, although the number of bacteria in the small intestine did not increase. The microbiota of the feces in the SHRSP5 rats fed HFCD was different from that in the SHRSP5 rats fed ND. In conclusion, there is an association between MAFLD and gut-microbiota alteration. Gut-microbiota alteration may be a therapeutic target for MAFLD.
## 1. Introduction
The number of patients with nonalcoholic fatty liver disease (NAFLD), including nonalcoholic steatohepatitis (NASH), has increased over the years [1]. As NASH causes cirrhosis and hepatocellular carcinoma (HCC), it is an important health issue worldwide. However, the pathogenesis of NASH also involves mechanisms that remain unknown [2,3].
Fatty liver associated with metabolic dysfunction is common [4,5,6]. Metabolic-dysfunction-associated fatty-liver disease, “MAFLD,” may be a more appropriate overarching term [4,5]. Metabolic-dysfunction-associated fatty-liver disease is the principal worldwide cause of liver disease and affects nearly a quarter of the global population [4,5]. Diagnosis of MAFLD is based on the detection of liver steatosis together with the presence of at least one of three criteria: overweight or obesity, type 2 diabetes mellitus, or clinical evidence of metabolic dysfunction, such as an increased waist circumference and an abnormal lipid or glycemic profile [5]. Patients with hepatic steatosis and lean/normal weight are diagnosed with MAFLD in the presence of at least two of the following metabolic risk abnormalities: an increased waist circumference, hypertension, or an abnormal lipid or glycemic profile [5]. Patients with NAFLD are at a substantially higher risk of fatal and non-fatal cardiovascular events [6]. NAFLD and cardiovascular disease share multiple common conditions, such as obesity, diabetes, dyslipidemia, and hypertension. These diseases may also share multiple common mechanisms, such as dietary habits, smoking, lack of exercise, gut-microbial dysbiosis, and genetics [6].
It has been reported that there is an association between NAFLD/NASH and the gut-microbiota [5]. Individuals with NASH have a higher prevalence of small-intestinal bacterial overgrowth (SIBO) [7]. Intestinal mucosa-barrier malfunction may also play a role in NASH [8]. Individuals with NASH have a lower percentage of Bacteroidetes (ratio of Bacteroidetes to total bacteria counts) than those with simple steatosis or healthy controls [9]. Thus, intestinal bacteria and gut-microbiota dysbiosis may play an important role in the development of NAFLD and NASH [3]. It has also been reported that gut-microbiota dysbiosis is linked to hypertension [10]. The gut-microbiota influence stroke pathogenesis and treatment outcomes [11,12].
Spontaneously hypertensive rats (SHR) and stroke-prone spontaneously hypertensive rats (SHRSP) are well-established parallel lines derived from outbred Wistar–Kyoto (WKY) rats [13,14]. We previously demonstrated a NASH model using arteriolipidosis-prone rats (ALR; SHRSP5), a subline of SHRSP rats, by feeding them a high-fat- and high-cholesterol-containing diet (HFCD) [15]. SHRSP5 rats fed HFCD developed NASH, abnormal lipid profiles, a lean body, hypertension, and stroke [13,14,15].
In the present study, we examined the gut-microbiota isolated from stroke-prone spontaneously hypertensive-5 rats (SHRSP5) fed a normal diet (ND) or HFCD at 12 weeks of age and clarified the differences between their gut-microbiota. We observed differences between the microbiota of the feces in the SHRSP5 rats fed HFCD and those in the SHRSP5 rats fed ND, as well as an increase in the Firmicutes/Bacteroidetes (F/B) ratio, which is a signature of gut dysbiosis, in the microbiota from the small intestines of the SHRSP5 rats fed HFCD. Our observation partially supports the concept of “MAFLD” from the point of view of gut-microbiota dysbiosis.
## 2. Results
## 2.1. Quantitative Analysis of 16S Ribosomal RNA Genes of Bacteria of Microbiota
Fecal-pellet DNA was isolated from the 12-week-old SHRSP5 rats fed ND or HFCD for 7 weeks [15]. As previously reported [15], pathological findings consistent with NASH were observed in the HFCD group, whereas only diffuse lipid droplets were seen in the hepatocytes of the ND group at 12 weeks of age. As one rat died in the HFCD group, its fecal DNA could not be analyzed. At the same time, DNA was also isolated from the small-intestinal contents of both groups of rats.
First, we performed real-time PCR to measure the 16S ribosomal (r)RNA genes of the bacteria in the small intestines and feces of both groups of rats (Table 1). We noticed that the quantities of the 16S rRNA genes in the small intestines of the SHRSP5 rats fed HFCD were significantly lower than those of the SHRSP5 rats fed ND ($p \leq 0.05$). However, the DNA from all samples was sufficient for the subsequent analysis.
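The text does not detail the qPCR quantification approach; as a hedged illustration, the sketch below converts Ct values to 16S copy numbers via a standard curve, with all values hypothetical.

```python
# Hedged sketch: absolute 16S rRNA gene quantification from a qPCR standard
# curve (Ct vs. log10 copies); dilution series and Ct values are made up.
import numpy as np

std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])    # standard dilution series
std_ct = np.array([30.1, 26.8, 23.4, 20.0, 16.7])   # measured Ct values

# Fit the linear standard curve: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

def ct_to_copies(ct):
    """Invert the standard curve to estimate copy number from a sample Ct."""
    return 10 ** ((ct - intercept) / slope)

print(ct_to_copies(24.9))   # e.g., a hypothetical small-intestine sample
```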
Thus, the HFCD reduced the 16S rRNA genes in the small intestines of the SHRSP5 rats compared with the ND. Interestingly, there may be an association between the reduction in bacteria and the fibrosis and steatosis of the liver in the SHRSP5 rats fed HFCD. The effects of HFCD intake may be more important for the development of hepatic fibrosis in NASH than SIBO.
## 2.2. Next-Generation Sequencing of the V4–V5 Region of 16S rRNA Genes of Gut-Microbiota
Gut-microbiota dysbiosis is occasionally observed in patients with NASH [16]. The bacterial 16S rRNA gene has been used to define bacterial taxonomy and phylogeny. In order to understand the association between the gut-microbiota and the pathogenesis of NASH, we analyzed the V4–V5 region of the 16S rRNA genes from the bacteria in the small intestines and feces of the SHRSP5 rats fed ND or HFCD on the Illumina MiSeq platform. The sequencing-read numbers are shown in Table 2 and ranged from 18,255 to 31,756. In the small intestines, the average sequencing-read number of the rats fed ND was similar to that of the rats fed HFCD (28,819 ± 1944 vs. 29,567 ± 1956; no statistically significant difference). The sequenced reads covered a region of ~410 bp. These results indicate successful next-generation sequencing in the present study.
## 2.3. Microbiota of Small Intestine in SHRSP5 Rats Fed ND Are More Similar to Those of Small Intestine or Feces in SHRSP5 Rats Fed HFCD Than to Those of Feces in SHRSP5 Rats Fed ND
Next, we performed weighted UniFrac analyses to calculate the distances between the microbiota populations from the small intestines and feces in the SHRSP5 rats fed ND or HFCD [17] (Figure 1A). The microbiota of the small intestines in the SHRSP5 rats fed ND were more similar to those of the small intestines or feces in the SHRSP5 rats fed HFCD than to those of the feces in the SHRSP5 rats fed ND. The clustering analysis in the β-diversity analysis of the microbiota populations also supported these results (Figure 1B,C).
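The distance and ordination steps can be sketched as follows, assuming scikit-bio for the weighted UniFrac computation; the ASV table, tree, and sample labels are toy placeholders, not the study data.

```python
# Hedged sketch: weighted UniFrac distances and PCoA on a toy ASV table.
from io import StringIO
import numpy as np
from skbio import TreeNode
from skbio.diversity import beta_diversity
from skbio.stats.ordination import pcoa

otu_ids = ["ASV1", "ASV2", "ASV3", "ASV4"]
tree = TreeNode.read(StringIO(
    "((ASV1:0.2,ASV2:0.3):0.4,(ASV3:0.3,ASV4:0.2):0.5)root;"
))
counts = np.array([
    [40, 10, 5, 0],    # toy "ND small intestine" sample
    [35, 12, 8, 1],    # toy "HFCD small intestine" sample
    [5, 30, 20, 10],   # toy "ND feces" sample
    [2, 8, 45, 25],    # toy "HFCD feces" sample
])
ids = ["ND_SI", "HFCD_SI", "ND_feces", "HFCD_feces"]

dm = beta_diversity("weighted_unifrac", counts, ids, tree=tree, otu_ids=otu_ids)
ordination = pcoa(dm)    # principal-coordinates analysis of the distance matrix
print(dm)
print(ordination.proportion_explained[:2])
```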
A clear separation was observed in the principal-components analysis, clustering analysis, and β-diversity analysis of the microbiota of the feces between the SHRSP5 rats fed HFCD and those fed ND (Figure 1A–C). Notably, the microbiota of the feces of the SHRSP5 rats fed HFCD was different from that of the SHRSP5 rats fed ND.
## 2.4. The Firmicutes/Bacteroidetes (F/B) Ratio Increased in the Small Intestines of SHRSP5 Rats Fed HFCD Compared to That in SHRSP5 Rats Fed ND
An increase in the Firmicutes/Bacteroidetes (F/B) ratio, caused by an expansion of Firmicutes and/or a contraction of Bacteroidetes, is considered a signature of gut dysbiosis [10]. The F/B ratio in the small intestines of the SHRSP5 rats fed an HFCD increased compared to that of the SHRSP5 rats fed an ND (Figure 2A). The F/B ratio in the feces of the SHRSP5 rats fed with the HFCD tended to increase compared to that of the SHRSP5 rats fed with the ND (Figure 2B).
The F/B ratio in the small intestines of the SHRSP5 rats fed with the HFCD was ~4.6-fold higher than that of the SHRSP5 rats fed the ND (Figure 2A). The F/B ratio in the feces of the SHRSP5 rats fed the HFCD tended to be ~1.7-fold higher than that of the SHRSP5 rats fed the ND (Figure 2B).
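The fold-change arithmetic above is simple to reproduce; in the sketch below the relative abundances are illustrative placeholders chosen only so that the resulting ratios match the reported ~4.6- and ~1.7-fold changes.

```python
# F/B ratio and group fold changes on placeholder relative abundances.
import pandas as pd

rel = pd.DataFrame(
    {"Firmicutes": [0.62, 0.74, 0.55, 0.60],
     "Bacteroidetes": [0.25, 0.065, 0.30, 0.19]},
    index=["ND_SI", "HFCD_SI", "ND_feces", "HFCD_feces"],
)
fb = rel["Firmicutes"] / rel["Bacteroidetes"]
print(fb)
print("Small intestine fold change:", round(fb["HFCD_SI"] / fb["ND_SI"], 1))  # ~4.6
print("Feces fold change:", round(fb["HFCD_feces"] / fb["ND_feces"], 1))      # ~1.7
```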
In both the small intestines and the feces of SHRSP5 rats fed on an HFCD, the number of both Firmicutes and Bacteroidetes decreased. In the feces of the SHRSP5 rats fed the HFCD, the number of Proteobacteria increased (Figure 3).
In the present study, among the Firmicutes, Allobaculum decreased in the feces of the SHRSP5 rats fed HFCD. Lactobacillus decreased and Streptococcus increased in the small intestines of the SHRSP5 rats fed HFCD. Clostridium increased in both the small intestines and the feces of the SHRSP5 rats fed HFCD. Of the Bacteroidetes, Porphyromonadaceae decreased in the feces of the SHRSP5 rats fed HFCD. Of the Proteobacteria, Escherichia increased in both the small intestines and the feces of the SHRSP5 rats fed HFCD.
## 3. Discussion
In the present study, we examined the gut-microbiota isolated from 12-week-old SHRSP5 rats fed ND or HFCD and clarified the differences between their gut-microbiota. We observed that the F/B ratio in both the small intestines and the feces of the SHRSP5 rats fed HFCD increased compared to that of the SHRSP5 rats fed ND. Notably, the quantity of 16S rRNA genes in the small intestines of the SHRSP5 rats fed HFCD was significantly lower than that of the SHRSP5 rats fed ND. The microbiota of the feces of the SHRSP5 rats fed HFCD was different from that of the SHRSP5 rats fed ND.
Li et al. reported the ability of *Grifola frondosa* heteropolysaccharide to ameliorate NAFLD in rats fed a high-fat diet (HFD) and significantly increase the proportions of Allobaculum [18]. Increases in Allobaculum can help infant mice resist the development of obesity, according to an investigation of the intestinal microbiota in mice [19]. These reports partially support our observation that Firmicutes and Allobaculum decreased in the feces of the SHRSP5 rats fed the HFCD.
Panasevich et al. reported that soy protein is effective at preventing hepatic steatosis and that, in an analysis of fecal bacterial 16S rRNA, soy-protein isolate intake elicited increases in Lactobacillus in obese Otsuka Long–Evans Tokushima fatty (OLETF) rats [20]. The rates of Streptococcus, belonging to the Bacilli, were significantly increased in rats fed a high-fat diet [21]. Compared with healthy subjects, NAFLD patients show an increase in the percentage of pathogenic Streptococcus [22]. Previous studies [20,21,22] support our observations that the rates of Lactobacillus decreased and those of Streptococcus increased in the small intestines of the SHRSP5 rats fed HFCD.
Individuals with NAFLD might be at increased risk of the development of Clostridioides difficile colitis [23]. Clostridioides difficile colitis can trigger changes associated with the development of NAFLD [24]. In our study, Clostridium also increased in both the small intestines and the feces of the SHRSP5 rats fed HFCD.
High-fat diets result in quantitative alterations in the aerobes (Escherichia coli) in NASH rats [25]. Of the Proteobacteria, Escherichia increased in both the small intestines and the feces of the SHRSP5 rats fed HFCD. In $37.5\%$ (12/32) of the patients with NAFLD, SIBO was present, with Escherichia coli as the predominant bacterium [26]. A previous study also demonstrated an increase in the genus Escherichia among the gut-microbiota in the development and progression of NASH [27,28].
The presence of SIBO decreases small-intestinal motility in NASH rats [25]. A high-fat diet did not increase the anaerobes (Lactobacilli) [25]. Bacteroides species are also anaerobic. In the present study, of the Bacteroidetes, Porphyromonadaceae decreased in the feces of the SHRSP5 rats fed HFCD. The presence of SIBO and endotoxemia can result in changes in toll-like receptor (TLR)-signaling gene expression, leading to the development of NAFLD [26]. The abundance of the Bacteroidetes phylum may be increased, decreased, or unaltered in NASH patients [28].
Thus, SIBO plays a role in the pathogenesis of NASH [7]. Patients with NASH and those with significant liver fibrosis on liver biopsy had a significantly higher incidence of SIBO than patients without NASH and those without significant liver fibrosis, respectively [29,30]. The onset of NASH in childhood is also a significant health problem [31]. There is an association between NAFLD and SIBO in obese children [32]. SIBO affects the structural and functional characteristics of the liver, resulting in higher insulin and glucose levels, higher neutrophil-to-lymphocyte ratios, and a greater prevalence of NAFLD. A meta-analysis showed a possible association between SIBO and NAFLD in children [33].
In patients with morbid obesity and NAFLD, higher grades of liver steatosis were associated with higher circulating lipopolysaccharide (LPS)-binding protein levels and higher SIBO rates [34]. The presence of SIBO may enhance intestinal permeability and endotoxemia in NASH patients [35]. Increased endotoxemia may enhance the innate immune response, including TLR-signaling pathways, leading to inflammation and fat deposition in the liver.
The symptoms related to SIBO are bloating, diarrhea, malabsorption, body-weight loss, and malnutrition [36]. SIBO is a heterogeneous syndrome characterized by an increased number and/or abnormal type of bacteria in the small intestine [36]. Notably, the SHRSP5 rats fed the HFCD presented diarrhea and body-weight loss compared to those fed the ND [15]; these symptoms are consistent with those of SIBO. In the SHRSP5 rats fed the HFCD, abnormal types of bacteria were observed in the small intestines, although the number of these bacteria did not increase (Table 1). These findings suggest that the HFCD is more important than SIBO for the development of hepatic fibrosis in NASH.
High-fat diet (HFD)-dependent differences at the phylum, class, and genus levels appear to lead to dysbiosis, characterized by an increase in the F/B ratio, and Firmicutes was the dominant phylum in male Sprague–Dawley (SD) rats (7 weeks old) fed an HFD and exhibiting steatohepatitis [37], supporting our observation (Figure 2B). An eight-week treatment with Gegen Qinlian decoction (GGQLD), a well-known traditional Chinese herbal medicine, improved these HFD-induced changes [37]. Hugan Qingzhi tablet (HQT), a lipid-lowering and anti-inflammatory medicinal formula that has been used to prevent and treat NAFLD, reduced the F/B ratio in HFD-fed rats [21]. Curcumin and metformin, which have a therapeutic effect against NAFLD, reduced the F/B ratio and reverted the composition of the HFD-disrupted gut-microbiota in male Sprague–Dawley rats fed an HFD [38]. Gut-microbiota can play a role in the pathogenesis of NAFLD, as dysbiosis is associated with reduced bacterial diversity, an altered F/B ratio, a relative abundance of alcohol-producing bacteria, or other specific genera [39].
Major risk factors of MAFLD are overweight/obesity, central obesity, type 2 diabetes mellitus, dyslipidemia, arterial hypertension, metabolic syndrome, insulin resistance, dietary factors, lifestyle, and sarcopenia [5]. Gut-microbiota, hyperuricemia, hypothyroidism, sleep apnea syndrome, polycystic ovary syndrome, polycythemia, hypopituitarism, genetic and epigenetic factors, and a family history of metabolic syndrome, including high blood pressure, are further common and uncommon risk factors of MAFLD [5].
An association between hypertension and gut-microbiota alteration has been reported [10], as has an association between stroke and gut-microbiota alteration [11,12]. An association between obesity and gut-microbiota alteration has also been reported [40], although fecal microbiota transplantation did not reduce body mass index. Evidence for the role of the gut-microbiota in metabolic diseases, including type 2 diabetes, has been provided [41]. Human and animal studies indicate an association between diet and hepatic steatosis [42,43]. The association between MAFLD and gut-microbiota alteration should now be clearer given the results of the present study. Dietary factors, such as high-calorie diets rich in saturated fats and cholesterol, soft drinks high in fructose, and highly processed foods, are known to influence the severity of NAFLD, and changes in the gut-microbiota also do so, at least in part [44]. In the present study, the HFCD altered the gut-microbiota.
We observed an association between NASH and gut-microbiota alteration in the SHRSP5 rats, which originated from stroke-prone, spontaneously hypertensive rats (SHRSP), fed the HFCD. The recent concept of MAFLD highlights the association between fatty liver disease, hypertension, stroke, and other metabolic diseases. The results of the present study may partially support the association between MAFLD and gut-microbiota alteration, and gut-microbiota alteration may be a therapeutic target for MAFLD. The key question is how and why the altered microbiota relate to the pathological phenotype; studies of the underlying mechanism should be performed.
The 16S rRNA gene is present in multiple copies in the genomes of bacterial pathogens [45,46]. Therefore, amplicon sequencing of the bacterium-specific 16S rRNA gene is a useful method for investigating a broad range of bacterial species. However, it is unclear whether amplicon-sequencing-based detection of the 16S rRNA gene is useful for determining the causative pathogen. A major problem is that the 16S rRNA gene is amplified not only from pathogenically relevant bacteria but also from irrelevant ones, which is one of the limitations of this study.
Another limitation of the present study is the small number of rats used, because the present study was an initial study; we will elucidate the mechanisms further in future work. For example, further improvements in the bioinformatics analysis will be needed, such as the use of the QIIME 2 software, which uses amplicon sequence variants (ASVs) instead of operational taxonomic units (OTUs) [47,48,49,50,51], or a denoising step, which allows microbial taxa to be obtained with higher confidence [52].
## 4.1. Animals
This investigation conformed to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH publication no. 85-23, 1996). The Ethics Committee of Nihon University School of Medicine examined all research protocols involving the use of animals and approved this study (no. 11-034). SHRSP5 rats were obtained from Disease Model Cooperative Research Association (Kyoto, Japan) [13,14]. The SHRSP5 rat is a subline obtained by feeding HFCD to SHRSP rats [53,54]. These SHRSP5 rats are characterized by fat deposition in their arteries, as well as fat deposition in and fibrosis of their livers, indicating the development of diet-induced NASH [15].
## 4.2. Dietary Intervention
The ND group was fed only a stroke-prone (SP) diet. The SP diet was purchased as MF from Oriental Yeast Co., Ltd., Itabashi-ku, Tokyo, Japan. The HFCD consisted of $68\%$ (w/w) SP diet, $25\%$ (w/w) palm oil, $5\%$ (w/w) cholesterol, and $2\%$ (w/w) cholic acid [15]. In 100 g of ND, there was approximately 7.9 g water, 23.1 g protein, 5.1 g fat, 5.8 g ash, 2.8 g fiber, and 55.3 g nitrogen-free extract, providing 359 kcal, according to the information from Oriental Yeast (https://www.oyc.co.jp/bio/LAD-equipment/LAD/ingredient.html (accessed on 13 February 2023)). The quantities of vitamins A, D3, E, K3, B1, B2, C, B6, B12, inositol, biotin, pantothenic acid, niacin, choline, and folic acid were 1283 IU, 137 IU, 9.1 mg, 0.04 mg, 2.05 mg, 1.1 mg, 4 mg, 0.87 mg, 5.5 mg, 439 mg, 27 μg, 2.45 mg, 10.61 mg, 0.18 g, and 0.17 mg, respectively, in 100 g of ND. We expected each rat to eat ~20 g of the diet daily. Experiments were conducted at least twice for consistent observations.
## 4.3. Sample Collection
Three rats from each group were examined [15]. Their feces were collected for 16S rRNA sequencing analysis. We gathered only the top layers of the feces and performed the isolation under sterile conditions to avoid bacterial contamination. Isoflurane was used for anesthesia when sampling the contents of the small intestines. Heart blood was collected under general anesthesia; after an abdominal median incision, heart blood was collected as described elsewhere [15]. After a perianal incision, we collected the contents of the small intestine for further analysis. We performed animal experiments according to the Japanese animal welfare guidelines (https://www.maff.go.jp/j/chikusan/sinko/animal_welfare.html (accessed on 13 February 2023)) in effect at that time.
## 4.4. Quantification of 16S rRNA Genes by Real-Time PCR
The total bacterial genomic DNA was extracted using the Extrap Soil DNA Kit Plus ver.2 (Nippon Steel Corporation, Tokyo, Japan) and stored at −20 °C prior to further analysis. Equal amounts of DNA were used for the subsequent PCR analyses.
The total number of bacterial 16S rRNA genes was estimated using a TaqMan-based qPCR approach with primers Bac1055YF, Bac1392R, and Q-probe Bac 1115Probe, which were described previously [55] (Table 3).
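For illustration, quantification with a TaqMan assay of this kind typically relies on a standard curve relating Ct values to known 16S rRNA gene copy numbers. The following Python sketch shows that calculation; all Ct values, copy numbers, and the sample Ct are hypothetical and are not data from this study.

```python
import numpy as np

# Hypothetical standard curve: Ct values for serial dilutions of a standard
# with known 16S rRNA gene copy numbers (10^3 to 10^8 copies per reaction).
log10_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
ct_standards = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3])

# Linear standard curve: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(log10_copies, ct_standards, 1)

# Amplification efficiency implied by the slope (-3.32 corresponds to ~100%).
efficiency = 10.0 ** (-1.0 / slope) - 1.0

def copies_from_ct(ct):
    """Interpolate gene copies for a sample Ct on the standard curve."""
    return 10.0 ** ((ct - intercept) / slope)

print(f"efficiency = {efficiency:.1%}")
print(f"Ct 24.5 -> {copies_from_ct(24.5):.2e} copies")
```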
## 4.5. Next-Generation Sequencing 16S rRNA Genes
In general, 16S and/or internal transcribed spacer ribosomal RNA sequencing is performed as an amplicon sequencing method to identify and compare the bacterial or fungal flora of collected samples [50]. This method identifies taxa after amplifying the target region from the original material and sequencing it on a next-generation platform. We performed sequencing analysis of 16S rRNA genes in the present study.
PCR with a high-fidelity DNA polymerase was used to amplify the V4–V5 region of the 16S rRNA gene with primers U515F and 926R (Table 3). An Agilent 2100 bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and the PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA) were used to purify and quantify the resulting PCR amplicons. The amplicons were pooled in equal amounts and subjected to paired-end 2 × 250-base-pair sequencing on the Illumina MiSeq platform (Illumina, San Diego, CA, USA). Finally, merged sequences of ~410 bp were analyzed.
## 4.6. Data Analysis
Standard bioinformatics-alignment comparison was utilized for data analysis [56]. The Quantitative Insights Into Microbial Ecology (QIIME) pipeline was employed to process the sequencing data [16]. Paired-end reads were demultiplexed according to a combination of forward and reverse indices. Additional quality filtering required an exact match to the sequencing primers and an average quality score of 30 or higher on each read. Prior to further analysis, each paired-end read was stitched into one contiguous read using the fast length adjustment of short reads (FLASH) software tool. Reads that could not be joined were excluded from downstream analysis. All sequences passing filters were aligned against the Silva non-redundant 16S reference database (v108) and assigned taxonomic classifications using USEARCH at a $97\%$ identity threshold. Dereplication into unique reference-sequence-based operational taxonomic units (refOTUs) was performed using UCLUST at a $97\%$ clustering threshold and summarized in a refOTU table. Additional alpha-diversity measures and normalized per-level taxonomic abundances were created using custom scripts written in R [10]. Differentially significant features at each level were identified using linear discriminant analysis (LDA) along with effect-size measurements (LEfSe) [57]. Three-dimensional principal-coordinates analysis (PCoA) plots using the tree-based UniFrac distance metric were generated through custom scripts in R and scripts from the QIIME package [16]. The OTU taxonomic classification was conducted with BLAST, searching the representative sequence set against the database using the best hit, as in previous studies [58]. Classification of bacterial taxonomy based on the end product was performed as previously described [59]. Briefly, genera were classified into more than one group if they were defined as producers of multiple metabolites. Genera defined as producing equol, histamine, hydrogen, or propionate constituted only a minor portion of the population and were therefore excluded from this analysis. A representative sequence from each OTU was selected according to the default parameters.
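For readers unfamiliar with the downstream summaries such a pipeline produces, the following Python sketch illustrates how phylum-level relative abundances, the F/B ratio, and Shannon alpha diversity can be derived from a refOTU count table. The table and its counts are hypothetical, and the study itself used QIIME and custom R scripts rather than this code.

```python
import numpy as np
import pandas as pd

# Hypothetical refOTU count table: one row per OTU with a phylum annotation,
# one column of read counts per sample.
otu = pd.DataFrame({
    "phylum": ["Firmicutes", "Firmicutes", "Bacteroidetes", "Proteobacteria"],
    "ND_feces": [520, 310, 400, 15],
    "HFCD_feces": [700, 410, 180, 90],
})

# Collapse to phylum level and convert to per-sample relative abundances.
phylum_counts = otu.groupby("phylum").sum()
rel_abund = phylum_counts / phylum_counts.sum()

# Firmicutes-to-Bacteroidetes (F/B) ratio per sample.
fb_ratio = rel_abund.loc["Firmicutes"] / rel_abund.loc["Bacteroidetes"]

# Shannon alpha diversity per sample (natural log; zero-count taxa dropped).
def shannon(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

print(fb_ratio)
print(rel_abund.apply(shannon, axis=0))
```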
## 5. Conclusions
As in SIBO syndrome, the SHRSP5 rats fed the HFCD presented diarrhea and body-weight loss, with abnormal types of bacteria in their small intestines, although the number of these bacteria did not increase. Our results strongly support the association between MAFLD and gut-microbiota alteration.
# Analysis of the Role of Stellate Cell VCAM-1 in NASH Models in Mice
## Abstract
Non-alcoholic fatty liver disease (NAFLD) can progress to non-alcoholic steatohepatitis (NASH), characterized by inflammation and fibrosis. Fibrosis is mediated by hepatic stellate cells (HSC) and their differentiation into activated myofibroblasts; the latter process is also promoted by inflammation. Here we studied the role of the pro-inflammatory adhesion molecule vascular cell adhesion molecule-1 (VCAM-1) in HSCs in NASH. VCAM-1 expression was upregulated in the liver upon NASH induction, and VCAM-1 was found to be present on activated HSCs. We therefore utilized HSC-specific VCAM-1-deficient and appropriate control mice to explore the role of VCAM-1 on HSCs in NASH. However, HSC-specific VCAM-1-deficient mice, as compared to control mice, did not show a difference with regards to steatosis, inflammation and fibrosis in two different models of NASH. Hence, VCAM-1 on HSCs is dispensable for NASH development and progression in mice.
## 1. Introduction
With the continuously expanding obesity pandemic, the prevalence of nonalcoholic fatty liver disease (NAFLD) is constantly increasing [1]. NAFLD is highly associated with insulin resistance, metabolic syndrome and type 2 diabetes and comprises a spectrum of liver pathologies [2,3,4]. Specifically, apart from benign hepatic steatosis, which is characterized by elevated lipid accumulation in hepatocytes, the disease can progress to non-alcoholic steatohepatitis (NASH), characterized by hepatocyte damage, inflammation and fibrosis. NASH affects approximately 1 in 5 NAFLD patients and poses a significantly higher risk for development of cirrhosis and hepatocellular carcinoma (HCC) [5,6]. Since FDA-approved treatments for NASH are lacking, novel therapeutic strategies are urgently needed [7].
Inflammation is considered a major instigator for the progression of simple steatosis to NASH, with infiltrating monocyte-derived macrophages and activated Kupffer cells playing a cardinal role in this process, via the secretion of inflammatory cytokines and chemokines, such as IL-1b, TNF and CCL-2, as well as major pro-fibrotic mediators, such as TGF-β [8,9,10]. Importantly, these mediators lead to activation of hepatic stellate cells (HSCs), which constitute the principal fibrogenic cell type of the liver. Indeed, upon hepatic damage, HSCs become activated and transdifferentiate into an elongated population of myofibroblasts that produce large amounts of extracellular matrix (ECM) [11]. Continuous HSC activation in response to sustained hepatic damage results in excessive ECM accumulation, causing liver fibrosis and scarring, a feature of chronic hepatic disorders including NASH [11].
Despite the multiple soluble mediators, which have been described to activate HSCs in a paracrine fashion provoking their differentiation into myofibroblasts, previous studies have shown that HSCs may also interact with other cells in a direct manner. For instance, HSCs express major histocompatibility molecules (MHC) of both class I and class II, as well as costimulatory molecules, such as CD86 [12,13]. Moreover, pro-inflammatory adhesion molecules, such as VCAM-1, are upregulated in HSCs under inflammatory conditions [14,15,16]. VCAM-1 represents a major counter-receptor for α4β1 integrin in different leukocytes [17,18,19]. In the liver, VCAM-1 in sinusoidal endothelial cells plays a role for leukocyte adhesion during NASH and contributes to fibrosis [20,21].
Interestingly, Lefere et al. reported that serum VCAM-1 levels predicted hepatic fibrosis in patients with NAFLD, indicating a potential role of VCAM-1 in the fibrotic pathogenesis of NASH [22]. Considering the special position of HSCs, which line the space of Disse, and previous findings that VCAM-1 is upregulated in these cells by different inflammatory triggers [14,15,16], we aimed here to investigate the role of VCAM-1 in HSCs for NASH development and progression. To this end, we utilized mice deficient for VCAM-1 in HSCs and appropriate control mice that were subjected to two different established models of diet-induced NASH. Our findings demonstrate that VCAM-1 in HSCs is dispensable for inflammation and fibrosis during NASH.
## 2.1. VCAM-1 Is Upregulated in the Liver during NASH and Expressed by Activated HSC
Leukocyte integrins have been implicated in fibrotic liver diseases [23]. Previous studies investigating the integrin ligand VCAM-1 focused on the hepatic endothelium [20,21], while only a few studies have mentioned the expression of VCAM-1 in HSCs, without providing any mechanistic evidence on its possible role in HSC function and HSC-related pathophysiology during NAFLD and NASH [14,15]. Therefore, we first fed wild-type mice with a control diet (ND) or a methionine-low, choline-deficient high-fat diet (HCD) for 6 weeks, as described in the Materials and Methods, to induce NASH, and we assessed the expression of α4 integrin, the receptor of VCAM-1, on different leukocyte subpopulations, utilizing flow cytometry. Expression of α4-integrin was upregulated upon NASH induction on monocytes, Kupffer cells and monocyte-derived macrophages (Figure 1A). Moreover, the mRNA expression of VCAM-1 was upregulated in the livers of NASH mice (Figure 1B).
As HSCs have been previously reported to express VCAM-1, and given its upregulation in the NASH liver, we next sought to investigate the expression and function of VCAM-1 in HSCs during NASH. To study VCAM-1 expression in activated HSCs during NASH, we first performed immunofluorescence staining for VCAM-1 together with desmin, a marker of activated HSCs, in liver sections from wild-type mice subjected to HCD-induced NASH (Figure 2A). Indeed, VCAM-1 showed substantial co-localization with desmin, thus confirming the presence of VCAM-1 in activated HSCs (Figure 2A). In order to further strengthen this finding, we applied the aforementioned staining strategy in liver sections from HSC-specific VCAM-1-deficient mice and control mice with floxed Vcam-1 (Cre+Vcam1f/f and Cre-Vcam1f/f, respectively) that received the HCD (Figure 2B). Quantification of the immunofluorescence analysis in the stained liver sections revealed that VCAM-1 expression in HSCs was significantly reduced in Cre+Vcam1f/f mice as compared to the Cre-Vcam1f/f mice (Figure 2B). Thus, this staining corroborated that VCAM-1 is expressed by activated HSCs and confirmed the efficient deletion of VCAM-1 in HSCs in Cre+Vcam1f/f mice (Figure 2B). In addition, both mRNA and protein expression of VCAM-1, as assessed by qPCR and Western blot analysis, respectively, were significantly reduced in livers of HCD-fed Cre+Vcam1f/f mice, as compared to Cre-Vcam1f/f mice (Figure 2C,D).
## 2.2. VCAM-1 in HSCs Is Dispensable for NASH Development
Next, in order to study the potential role of VCAM-1 in HSCs for the development and progression of NASH, a comprehensive analysis of the livers of Cre-Vcam1f/f and Cre+Vcam1f/f mice was performed. Despite the expression of VCAM-1 in activated HSCs in NASH, HSC-specific VCAM-1 deficiency did not affect the grade of steatosis and fibrosis upon HCD feeding (Figure 3A,B).
To assess NASH-related inflammation, we analyzed leukocyte populations by flow cytometry. No differences between HCD-fed Cre-Vcam1f/f and Cre+Vcam1f/f mice were observed with regards to the numbers of hepatic total leukocytes, neutrophils, Kupffer cells, monocyte-derived macrophages and infiltrating monocytes, as evaluated by flow cytometry analysis (Figure 4A). Moreover, quantitative PCR analysis of the expression of genes related to inflammation (Tnf, Il6, Il1b) and fibrosis (Tgfb1, Acta2, Col1a1, Desmin, Timp1) did not reveal any differences due to HSC-specific VCAM-1 deficiency (Figure 4B).
As, in the HCD-NASH model, liver pathology develops owing to choline deficiency in the diet, we next employed a second model of NASH, in which pathology develops in a different fashion. In particular, we used a 12-week western diet with high sugar supplementation in the water in conjunction with CCl4 administration to accelerate fibrosis development (Figure 5A); this model was recently shown to mimic histological and transcriptomic characteristics of human NASH [24]. There was no difference in liver weight between Cre-Vcam1f/f and Cre+Vcam1f/f mice at the end of the feeding period. Systemic metabolism, as assessed by the levels of fasting glucose and triglycerides, was also not different between the two strains (Figure 5B,C). Consistent with the findings from the HCD model, neither steatosis nor fibrosis was altered in HSC-specific VCAM-1-deficient mice, as compared to the control mice (Figure 5D,E).
Moreover, analysis of the inflammatory milieu of the liver of Cre-Vcam1f/f and Cre+Vcam1f/f mice by flow cytometry displayed no differences in the numbers of hepatic total leukocytes, neutrophils, Kupffer cells, monocyte-derived macrophages and infiltrating monocytes (Figure 6A). Furthermore, expression of genes related to inflammation (Tnf, Il6, Il1b) and fibrosis (Tgfb1, Acta2, Col1a1, Desmin, Timp1) was also not affected by HSC-specific VCAM-1 deficiency (Figure 6B). Together, VCAM-1 in HSCs does not contribute to liver steatosis, inflammation or fibrosis development in the course of NAFLD/NASH, as assessed in two different experimental models.
## 3. Discussion
HSCs are the cellular mediators of fibrosis during NASH via their differentiation from their quiescent state to activated HSCs and myofibroblasts [9,25]. Their activation is mediated by soluble mediators such as IL-1 and TNF, secreted by hepatocytes and several populations of immune cells, as well as major fibrosis-promoting factors such as TGF-β, expressed mainly by activated macrophages, both Kupffer cells and infiltrating monocyte-derived macrophages [9,26,27]. In contrast, less information exists about the role adhesion receptors, such as VCAM-1, may play in the context of HSC activation and transdifferentiation into myofibroblasts, although previous studies have reported upregulation of VCAM-1 in activated HSCs [14,15,16].
This prompted us to study the role of VCAM-1 in HSCs during NASH. We hypothesized that VCAM-1 in HSCs could regulate the accumulation of leukocyte populations in the liver microenvironment during NASH, or regulate intracellular signaling processes involved in HSC transdifferentiation into myofibroblasts. In line with this hypothesis, we have previously shown that adhesion of macrophages onto adipocytes, which are also cells of mesenchymal origin, in a manner that involved adipocyte VCAM-1 expression, can modulate their functional properties [19]. Herein, we first observed an upregulation of VCAM-1 expression in livers from NASH mice as compared to control mice, accompanied by upregulation of α4 integrin, the receptor of VCAM-1, on monocytes, Kupffer cells and monocyte-derived macrophages. By immunofluorescence analysis of liver sections, we confirmed the presence of VCAM-1 in HSCs, utilizing desmin as a pan-HSC marker. It should be noted that other markers, such as α-SMA, which is specific for activated HSCs, were not used in the present study. However, as the model of HCD-induced NASH in mice displays extensive liver fibrosis [21,28,29,30], the majority of HSCs have likely acquired an activated state; hence, our co-staining of liver sections for VCAM-1 and the pan-HSC marker desmin suggests the presence of VCAM-1 on activated HSCs. Previous studies have reported an upregulation of VCAM-1 in the liver and specifically in HSCs under inflammatory conditions, e.g., upon LPS administration or CCl4-induced fibrosis [14,15,16]. Importantly, TLR-4 activation of HSCs led to VCAM-1 upregulation [16]. However, the function of VCAM-1 in HSCs had not been studied under NASH conditions so far.
Despite the interesting finding that VCAM-1 expression in HSCs was enhanced during NASH, HSC-specific VCAM-1-deficient mice did not display any differences in steatosis, inflammation and fibrosis, compared to the control mice, as assessed by histology, flow cytometry and gene expression studies in the HCD-induced model. The HCD model is nowadays widely utilized for NASH studies [21,28,29,30]. We further confirmed our findings by subjecting HSC-specific VCAM-1-deficient and control mice to a second model of NASH induction, which is of longer duration as compared to the HCD, while mimicking several aspects of human NASH [24]. The absence of a difference in the phenotype of Cre+Vcam1f/f mice as compared to the Cre-Vcam1f/f ones upon NASH induction in the latter model confirmed that VCAM-1 in HSCs is dispensable for the progression of the disease. It is possible that other adhesion molecules expressed in HSCs may have compensated for the absence of HSC VCAM-1 in the Cre+Vcam1f/f mice; thus, a potential function of HSC VCAM-1 in NASH cannot be entirely excluded. Additionally, we can conclude that HSC VCAM-1 is dispensable for disease development and progression only in the two NASH models used; we cannot exclude that HSC VCAM-1 could play a role in a NASH or liver fibrosis model different from the two models used here. In contrast, VCAM-1 on liver sinusoidal endothelial cells (LSECs) plays a role in the accumulation of leukocytes during NASH, thereby accelerating hepatic inflammation and the progression of the disease [20,21].
Although serum VCAM-1 levels correlate with hepatic fibrosis in patients with NAFLD [22], a finding that may be linked with the upregulation of VCAM-1 in activated HSCs, as identified here, our functional results suggest that VCAM-1 in HSCs does not play a pathophysiological role in fibrosis progression. Hence, future studies should interrogate the utilization of VCAM-1 as a biomarker for NASH progression. Moreover, while no function of HSC VCAM-1 in liver fibrosis was found here, we cannot exclude that VCAM-1 in other cells could be a therapeutic target in NASH. These aspects should be addressed in future studies.
## 4.1. Animal Studies
Wild-type mice (C57BL/6) were from Charles River (Sulzfeld, Germany). Hepatic stellate cell-specific deletion of Vcam1 was achieved by crossing mice carrying a floxed Vcam1 allele (Jackson Laboratories, Bar Harbor, ME, USA) with mice in which Cre recombinase expression is driven by the mouse Lecithin:retinol acyltransferase (LRAT) promoter [31]. Wild-type mice were fed a normal chow diet (ND) as control or fed a methionine-low, choline-deficient high-fat diet (HCD, $60\%$ kcal from fat, $0.1\%$ methionine, A06071302, Research Diets) [28,29,32]. Lrat-Cre negative Vcam1 floxed/floxed and Lrat-Cre positive Vcam1 floxed/floxed mice (designated Cre-Vcam1f/f and Cre+Vcam1f/f, respectively) were fed the HCD.
In other experiments, Cre-Vcam1f/f and Cre+Vcam1f/f mice were fed a western diet, specifically a high fat, high fructose, and high cholesterol diet ($21.1\%$ fat, $41\%$ sucrose, and $1.25\%$ cholesterol, Teklad diets, TD. 120528) together with water including high sugar concentrations [23.1 g/L D-Fructose (SERVA, Heidelberg, Germany, 21830) and 18.9 g/L D-Glucose (Sigma-Aldrich, Taufkirchen, Germany, G8270)] for 12 weeks. In addition, the mice received weekly an intraperitoneal low dose of carbon tetrachloride (CCl4, Sigma-Aldrich, 289116, 0.32 µg/g of body weight) as an accelerator of liver fibrosis [24]. After 11 weeks of feeding and upon overnight fasting, blood glucose levels were measured in tail vein blood samples with a glucose meter device (Accu-Chek, Roche, Vienna, Austria), while the levels of triglycerides were measured with an Accutrend Plus system (Roche).
Mice were housed on a standard 12 h light/12 h dark cycle under specific pathogen-free conditions. Eight- to ten-week-old male mice were used in experiments. At the end of the feeding period, mice were euthanized, also undergoing systemic perfusion with phosphate-buffered saline (PBS), and tissues were collected for further analysis. Animal experiments were approved by the Landesdirektion Sachsen, Germany and by the Region of Attica, Greece.
## 4.2. Histological Analysis
Mouse livers were isolated from euthanized mice and fixed with $4\%$ PFA for 24 h. For Hematoxylin and Eosin (H&E) staining, liver samples were embedded in paraffin, and cut liver sections were deparaffinized and rehydrated. The sections were stained with Mayer's Haematoxylin (SAV, Liquid Production GmbH, Flintsbach am Inn, Germany, 10231.02500) and Eosin (Klinikapotheke Universitätsklinikum, Dresden, Germany) and mounted with VectaMount (Vector Laboratories, Newark, CA, USA, H-5000-60) after a series of ethanol washes ($80\%$, $95\%$, $100\%$). For Picrosirius red staining, deparaffinized and rehydrated liver sections were stained with Picrosirius red solution (Sigma-Aldrich, 365548) for 1 h and then washed with $1\%$ acetic acid. The liver sections were mounted with VectaMount after a series of ethanol washes as before. Images were acquired utilizing a ZEISS Axio Observer Z1 computerized microscope, and Picrosirius-red-positive areas per field of vision were quantified from at least 12 images per mouse using the Fiji software (ImageJ 2.1.0/1.53c).
For immunofluorescence staining, fixed liver samples were embedded in OCT upon incubation with a series of sucrose solutions ($10\%$, $20\%$, $30\%$) to achieve tissue cryoprotection. Liver sections were dried and permeabilized with $0.1\%$ Triton X-100 and then blocked using a serum-free protein block solution (Dako, Waldbronn, Germany, X090930-2). Liver sections were then incubated with primary antibodies against VCAM-1 (1:10, eBioscience, Darmstadt, Germany, # 14-1061-85) and desmin (1:100, Abcam, Berlin, Germany, ab32362) overnight at 4 °C. After washing with PBS, sections were incubated with secondary antibodies, namely Donkey anti-Rat (H + L) Alexa Fluor 555 (Abcam, ab150150) and Donkey anti-rabbit (H + L) Alexa Fluor 647 (Invitrogen, Darmstadt, Germany, A-31573) for 90 min at RT and counterstained with DAPI (Sigma-Aldrich, D9542). To reduce tissue autofluorescence, sections were incubated with TrueBlack® Lipofuscin Autofluorescence Quencher (Biotium, Fremont, CA, USA, #23007) for 30 s and mounted. Images were acquired with a ZEISS Axio Observer Z1 computerized microscope equipped with the Zen 3.2 (Blue edition) software. Images are shown in pseudocolor; the display color of the channels was set as to optimize clarity of merged images.
## 4.3. Flow Cytometry Analysis
The left lobe of the liver was isolated, minced and digested in high-glucose DMEM containing $0.5\%$ BSA, collagenase D (1.5 mg/mL, COLLD-RO, Roche), and DNaseI (5 U/mL, 04716728001, Roche) for 1 h at 37 °C with shaking. The cell suspension was filtered through a 100 µm cell strainer and centrifuged at 600× g for 7 min at 4 °C. Afterwards, the red blood cells were lysed using RBC Lysis Buffer (eBioscience, 00-4300-54) for 5 min at room temperature. Additionally, cell debris was removed by utilizing a debris removal solution (Miltenyi Biotec, Bergisch Gladbach, Germany, 130-109-398).
For analysis of α4 integrin expression in hepatic immune cells, upon debris removal the cells were incubated with mouse CD45 MicroBeads (Miltenyi Biotec, 130-052-301) for 15 min at 4 °C, and CD45+ cells were collected on an LS column (Miltenyi Biotec, 130-042-401). Then, they were stained with antibodies against CD11b (M1/70, Biolegend, San Diego, CA, USA, 101230), SiglecF (E50-2440, BD Biosciences, Heidelberg, Germany, 562680), Ly6G (1A8, Biolegend, 127624), F4/80 (BM8, eBioscience, 25-4801-82), Ly6C (AL-21, BD Biosciences, 553104), α4 integrin/CD49d (9C10, Biolegend, 103706), and Hoechst 33258 (Invitrogen, H1398). For the analysis of hepatic innate immune cells derived from livers of HCD-fed Cre-Vcam1f/f and Cre+Vcam1f/f mice, CD45+ cells, isolated as described above, were stained with antibodies against CD11b (M1/70, Invitrogen, 12-0112-82), SiglecF (E50-2440, BD Biosciences, 562680), Ly6G (1A8, Biolegend, 127624), F4/80 (BM8, eBioscience, 25-4801-82), Ly6C (AL-21, BD Biosciences, 553104), CD45 (30-F11, Biolegend, 103130), and Hoechst 33258 (Invitrogen, H1398). Stained cells were acquired on a BD FACSCanto™ II cytometer (BD Biosciences) and analyzed with FlowJo software (v10.1r7).
For analyzing the hepatic innate immune cells acquired from Cre-Vcam1f/f and Cre+Vcam1f/f mice, which were fed a western diet combined with CCl4 treatment, upon debris removal, the cells were stained with antibodies against CD45 (30-F11, Biolegend, 103133), CD11b (M1/70.15, Invitrogen, RM2804), Ly6G (1A8, BD Biosciences, 560599), F4/80 (BM8, eBioscience, 25-4801-82), and Ly6C (AL-21, BD Biosciences, 553104). Stained cells were run on an ARIA III cytometer (BD Biosciences) and analyzed by FlowJo software.
## 4.4. Gene Expression Analysis
Liver tissues were snap frozen in liquid nitrogen or kept in RNAlater (Invitrogen, AM7020). The liver samples were homogenized in TriReagent (MRC, Cincinnati, OH, USA, TR 118) by using the Precellys 24 tissue homogenizer and, after phase separation, the RNA was precipitated using $75\%$ ethanol. Finally, RNA was isolated by the NucleoSpin® RNA kit (Macherey-Nagel, Dueren, Germany, 740955.250) and reverse-transcribed with the iScript cDNA Synthesis Kit (Bio-Rad, Feldkirchen, Germany, 1708890). The qPCR was performed utilizing the SsoFast™ EvaGreen® Supermix (Bio-Rad, 1725204) and gene-specific primers on a CFX384 Real-time PCR detection system (Bio-Rad). Relative mRNA expression levels were calculated according to the ΔΔCt method upon normalization to 18S [33]. The primers used in this study are:
Vcam1 (F: CTTCCCAGAACCCTTCTCAG, R: GGGACCATTCCAGTCACTTC)
Tnf (F: AGCCCCCAGTCTGTATCCTTCT, R: AAGCCCATTTGAGTCCTTGATG)
Il1b (F: ATCCCAAGCAATACCCAAAG, R: GTGCTGATGTACCAGTTGGG)
Il6 (F: CCTTCCTACCCCAATTTCCAAT, R: AACGCACTAGGTTTGCCGAGTA)
Tgfb1 (F: CACAATCATGTTGGACAACTGCTCC, R: CTTCAGCTCCACAGAGAAGAACTGC)
Col1a1 (F: GAGCGGAGAGTACTGGATCG, R: GCTTCTTTTCCTTGGGGTTC)
Desmin (F: GTGGATGCAGCCACTCTAGC, R: TTAGCCGCGATGGTCTCATAC)
Acta2 (F: ACTGGGACGACATGGAAAAG, R: GTTCAGTGGTGCCTCTGTCA)
Timp1 (F: TACACCCCAGTCATGGAAAGC, R: CGGCCCGTGATGAGAAACT)
18S (F: GTTCCGACCATAAACGATGCC, R: TGGTGGTGCCCTTCCGTCAAT)
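As an illustration of the ΔΔCt calculation referenced above, the following Python sketch computes a relative expression value from raw Ct values; the Ct numbers are hypothetical placeholders, and the method assumes roughly equal amplification efficiency for target and reference genes.

```python
def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method, normalized to a reference
    gene (here 18S); assumes ~100% amplification efficiency for both genes."""
    dct_sample = ct_target_sample - ct_ref_sample
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_sample - dct_control)

# Hypothetical Ct values: Vcam1 vs. 18S in an HCD-fed liver and an ND control.
print(ddct_fold_change(24.1, 9.8, 26.0, 9.9))  # ~3.5-fold upregulation
```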
## 4.5. Immunoblot Analysis
Liver tissues were snap frozen in liquid nitrogen and homogenized in RIPA lysis buffer (Santa Cruz, Heidelberg, Germany, sc-24948A) containing a protease and phosphatase inhibitor cocktail (Roche, 04693159001, CO-RO) by using the Precellys Evolution homogenizer (Bertin Technologies, Frankfurt am Main, Germany) and then centrifuged at 14,000× g for 20 min at 4 °C. The supernatant was collected, and protein concentrations were determined by using a BCA protein assay kit (Thermo Scientific, Darmstadt, Germany, 23227). The protein samples (30 µg) were separated on a NuPAGE™ 4–$12\%$ gel (Thermo Scientific, NP0323BOX) and transferred to a PVDF membrane (Bio-Rad, 1620177). The membrane was blocked with $5\%$ skim milk for 1 h at RT and then incubated with a primary antibody against VCAM-1 (Abcam, ab134047) overnight at 4 °C, followed by incubation with the appropriate secondary antibody. After membrane stripping using Restore Western Blot Stripping Buffer (Thermo Scientific, 21059), the membrane was blocked again with $5\%$ skim milk for 1 h at RT and then incubated with an antibody against Vinculin (Cell Signaling, Leiden, The Netherlands, 4650) overnight at 4 °C, followed by incubation with the appropriate secondary antibody. Detection in each case was performed with the SuperSignal West Pico PLUS chemiluminescent substrate (Thermo Scientific, 34579) on a VILBER imaging system (FUSION FX, Eberhardzell, Germany). Densitometry was performed by using the Fiji software.
## 4.6. Statistical Analysis
A two-tailed Student’s t-test or a Mann–Whitney U test was used for statistical comparisons. The GraphPad Prism 8 software was used, and significance was set at $p < 0.05$.
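For readers reproducing these comparisons programmatically rather than in Prism, an equivalent pair of tests is available in Python's scipy; the per-mouse values below are hypothetical placeholders, not data from this study.

```python
from scipy import stats

# Hypothetical per-mouse Picrosirius-red-positive areas (%) per genotype.
cre_neg = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4]
cre_pos = [4.6, 5.0, 4.1, 5.2, 4.8, 4.3]

t_stat, p_t = stats.ttest_ind(cre_neg, cre_pos)           # two-tailed t-test
u_stat, p_u = stats.mannwhitneyu(cre_neg, cre_pos,
                                 alternative="two-sided")  # Mann-Whitney U
print(f"t-test p = {p_t:.3f}; Mann-Whitney p = {p_u:.3f}")
```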
# Association between spironolactone use and COVID-19 outcomes in population-scale claims data: a retrospective cohort study
## Abstract
### Background:
Spironolactone has been proposed as a potential modulator of SARS-CoV-2 cellular entry. We aimed to measure the effect of spironolactone use on the risk of adverse outcomes following COVID-19 hospitalization.
### Methods:
We performed a retrospective cohort study of COVID-19 outcomes for patients with or without exposure to spironolactone, using population-scale claims data from the Komodo Healthcare Map. We identified all patients with a hospital admission for COVID-19 in the study window, defining treatment status based on spironolactone prescription orders. The primary outcomes were progression to respiratory ventilation or mortality during the hospitalization. Odds ratios (OR) were estimated following either 1:1 propensity score matching (PSM) or multivariable regression. Subgroup analysis was performed based on age, gender, body mass index (BMI), and dominant SARS-CoV-2 variant.
### Findings:
Among 898,303 eligible patients with a COVID-19-related hospitalization, 16,324 patients ($1.8\%$) had a spironolactone prescription prior to hospitalization. 59,937 patients ($6.7\%$) met the ventilation endpoint, and 26,515 patients ($3.0\%$) met the mortality endpoint. Spironolactone use was associated with a significant reduction in odds of both ventilation (OR 0.82; $95\%$ CI: 0.75-0.88; $p < 0.001$) and mortality (OR 0.88; $95\%$ CI: 0.78-0.99; $p = 0.033$) in the PSM analysis, supported by the regression analysis. Spironolactone use was associated with significantly reduced odds of ventilation for all age groups, men, women, and non-obese patients, with the greatest protective effects in younger patients, men, and non-obese patients.
### Interpretation:
Spironolactone use was associated with a protective effect against ventilation and mortality following COVID-19 infection, amounting to up to $64\%$ of the protective effect of vaccination against ventilation and consistent with an androgen-dependent mechanism. The findings warrant initiation of large-scale randomized controlled trials to establish a potential therapeutic role for spironolactone in COVID-19 patients.
## INTRODUCTION
The continued proliferation of vaccine-evading severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) strains has reinforced the need for outpatient treatments to mitigate the clinical course of coronavirus disease 2019 (COVID-19).1 While a small number of antiviral therapies have received Food and Drug Administration approval for COVID-19, such treatments remain limited in both adoption and efficacy, owing to concerns about adverse reactions, drug-drug interactions, and cost.2,3 Consequently, it remains critical to identify any existing medications that may modulate the course of infection.
The potassium-sparing diuretic spironolactone has been proposed as a potential modulator of SARS-CoV-2 infection due to its interactions with multiple COVID-19-associated signaling pathways.4 Spironolactone functions chiefly as a mineralocorticoid receptor blocker, antagonizing the final stage of the renin-angiotensin-aldosterone system (RAAS).5 Given the involvement of angiotensin-converting enzyme 2 (ACE2), the canonical host receptor for SARS-CoV-2, in RAAS activity, mineralocorticoid antagonists have been hypothesized to alter ACE2 expression, which has been observed in vitro.6,7 In addition to its anti-mineralocorticoid effects, spironolactone is a strong inhibitor of the androgen receptor.8 The critical role of androgen signaling in upregulating TMPRSS2, which facilitates Spike processing during membrane fusion, suggests that spironolactone’s anti-androgenic activity could likewise impede viral entry.9 Existing clinical evidence for a protective role of spironolactone in COVID-19 is encouraging but inconclusive. One case-control study of 6,462 patients with liver cirrhosis in South Korea revealed a significant negative association between spironolactone use and COVID-19 diagnosis.10 A non-randomized, comparative study of bromhexine-spironolactone combination therapy in 103 patients identified a statistically significant $13\%$ reduction in hospitalization time for the intervention group.11 The only published randomized, controlled clinical trial of spironolactone in COVID-19, to our knowledge, was a trial of sitagliptin-spironolactone combination therapy in 263 patients, which suggested a potentially beneficial effect for the intervention group with respect to clinical progression score.12

To determine whether spironolactone use is associated with COVID-19 severity, we conducted the largest clinical investigation of spironolactone in COVID-19 to date. Using health insurance claims data from public and private payers covering over 325 million unique patients, we performed a retrospective cohort study of COVID-19 outcomes for spironolactone users.
## Study design and population
We conducted a retrospective cohort study based on deidentified medical and pharmaceutical data from the Komodo Healthcare Map, a collection of health insurance claims from public and private payers nationwide. The database contains claims data for approximately 325 million unique patients in the United States since October 1, 2015 and is closely aligned with the National Health Interview Survey population in terms of geography and demographics. The dataset encompassed medical claims, pharmacy claims, enrollment records, and mortality records.
We identified 909,531 patients in the database who experienced a hospitalization due to COVID-19 within the study window, spanning March 1, 2020 to June 3, 2022. In the event that a patient experienced multiple COVID-19-related hospitalizations, only the first encounter was considered. Patients under the age of 15 years were also excluded. Medical variables were defined using International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) codes; procedures were defined using ICD-10 Procedure Coding System (ICD-10-PCS), Current Procedural Terminology (CPT), and Healthcare Common Procedure Coding System (HCPCS) codes; and drug prescriptions were defined using National Drug Code (NDC) identifiers. The study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines.
## Ethics committee oversight
The study was declared exempt from institutional review board (IRB) review by the Stanford University IRB.
## Exposures and outcomes
Exposure was defined as a prescription for spironolactone within a 180-day window prior to the COVID-19-related hospitalization claim date.13,14 Only paid prescriptions, implying patient receipt of the medication, were considered, and multi-ingredient drugs were not included. The primary study outcome was progression to ventilation, defined as a claim for a respiratory ventilation procedure during the COVID-19-related hospitalization. We also considered mortality as an additional endpoint, which was defined as death recorded within the period covered by the COVID-19-related hospitalization claim. Time-stationarity of outcome variables was measured by Pearson correlation between endpoint probability and month of admission, beginning April 2020.
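As an illustration of the time-stationarity check, the following Python sketch computes the monthly endpoint probability and its Pearson correlation with the month of admission; the admission records are hypothetical, not drawn from the claims database.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical admission-level records: month of admission and endpoint flag.
df = pd.DataFrame({
    "admit_month": pd.PeriodIndex(
        ["2020-04", "2020-04", "2020-05", "2020-05", "2020-06", "2020-06"],
        freq="M"),
    "ventilated": [0, 1, 0, 0, 1, 0],
})

# Monthly endpoint probability, correlated against the month index.
monthly = df.groupby("admit_month")["ventilated"].mean().sort_index()
r, p = pearsonr(range(len(monthly)), monthly.to_list())
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```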
## Study variables
The study design controlled for demographic, medical, and pharmaceutical covariates. Demographic information included age as a continuous variable and gender. Medical covariates included body mass index (BMI ≥30 kg/m2 or <30 kg/m2), myocardial infarction, congestive heart failure, peripheral vascular disease, dementia, chronic pulmonary disease, rheumatic disease, peptic ulcer disease, mild liver disease, moderate or severe liver disease, diabetes without chronic complications, diabetes with chronic complications, hemiplegia or paraplegia, renal disease, malignancy, metastatic solid tumor, and human immunodeficiency virus (HIV) or acquired immunodeficiency syndrome (AIDS). For each condition, corresponding ICD-10-CM codes were obtained from the updated Charlson Comorbidity Index definitions (Supplementary Table S1).15 Pharmaceutical covariates included a paid prescription within a 180-day window prior to the COVID-19-related hospitalization for atorvastatin, levothyroxine, metformin, lisinopril, amlodipine, metoprolol, albuterol, omeprazole, losartan, or gabapentin. COVID-19 vaccination status was an additional covariate, defined as receipt of at least one dose of any COVID-19 vaccine prior to the COVID-19-related hospitalization. Values are reported as median (interquartile range [IQR]) for continuous variables and frequency (percent) for categorical variables.
## Propensity score matching
We performed propensity score matching (PSM) to obtain matched pairs of drug-exposed and non-drug-exposed patients.16 Propensity scores were derived by fitting a logistic regression model with L2 regularization to predict drug exposure status using all study covariates, normalized to unity. Nearest-neighbor matching on propensity scores was performed without a caliper to generate 1:1 matched pairs. Covariate balance between drug-exposed and non-drug-exposed groups was assessed by calculating the standardized bias for each covariate, with a standardized bias of less than 0.1 considered balanced (Supplementary Table S2).17 Odds ratios (OR) between drug exposure and each outcome of interest, as well as corresponding $95\%$ confidence intervals ($95\%$ CI) and p-values, were calculated using McNemar’s exact test.
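The matching procedure can be sketched in Python with scikit-learn and statsmodels (two of the packages listed under Computational Resources). This is a simplified, from-scratch illustration on synthetic data, not the study's actual code: it matches with replacement for brevity, whereas the study generated 1:1 matched pairs via psmpy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Synthetic cohort: X holds normalized covariates; exposure depends on X[:, 0].
n, k = 5000, 10
X = rng.normal(size=(n, k))
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-X[:, 0]))
outcome = rng.random(n) < 0.07  # hypothetical endpoint indicator

# 1. Propensity scores from an L2-regularized logistic regression.
ps_model = LogisticRegression(penalty="l2", max_iter=1000).fit(X, treated)
scores = ps_model.predict_proba(X)[:, 1]

# 2. Greedy 1:1 nearest-neighbor matching on the propensity score, no caliper.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
nn = NearestNeighbors(n_neighbors=1).fit(scores[c_idx].reshape(-1, 1))
_, j = nn.kneighbors(scores[t_idx].reshape(-1, 1))
matched_c = c_idx[j.ravel()]  # with replacement here, unlike psmpy's pairing

# 3. Balance check: standardized bias = mean difference / pooled SD (< 0.1).
def std_bias(x_t, x_c):
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2.0)
    return (x_t.mean() - x_c.mean()) / pooled_sd

bias = [std_bias(X[t_idx, f], X[matched_c, f]) for f in range(k)]
print("max |standardized bias|:", np.max(np.abs(bias)))

# 4. McNemar's exact test on the outcomes of the matched pairs.
table = np.zeros((2, 2), dtype=int)
for t, c in zip(t_idx, matched_c):
    table[int(outcome[t]), int(outcome[c])] += 1
print(mcnemar(table, exact=True))
```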
## Regression model
We also fit a second model to control for covariates without matching. We fit an L1-regularized logistic regression (LR) model using the same explanatory variables as in the propensity score derivation, with the addition of drug exposure status. OR were calculated from coefficients of the fitted model, and confidence intervals and p-values for each OR were calculated from the corresponding t statistics.
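A minimal sketch of this step, using the L1-regularized logistic regression available in statsmodels via Logit.fit_regularized, is shown below on synthetic data; the covariates, effect sizes, and regularization strength are illustrative assumptions rather than the study's settings.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic design: five covariates plus a drug-exposure indicator.
n = 20000
X = rng.normal(size=(n, 5))
exposure = (rng.random(n) < 0.1).astype(float)
logit_p = -3.0 + 0.3 * X[:, 0] - 0.2 * exposure  # true exposure OR = exp(-0.2)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

# L1-regularized logistic regression; exposure status is the last column.
design = sm.add_constant(np.column_stack([X, exposure]))
res = sm.Logit(y, design).fit_regularized(method="l1", alpha=1.0, disp=False)

# Odds ratio and 95% CI for exposure, from the fitted coefficient.
or_exposure = np.exp(res.params[-1])
ci_low, ci_high = np.exp(res.conf_int()[-1])
print(f"OR = {or_exposure:.2f} (95% CI: {ci_low:.2f}-{ci_high:.2f})")
```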
## Subgroup and sensitivity analysis
We measured treatment effects in patient subgroups, grouping by male gender, female gender, obesity (BMI≥30 kg/m2), non-obesity (BMI<30 kg/m2), and age brackets (<60, 60-74, and ≥75 years). Additionally, we analyzed cases in time periods with predominance of specific variants in the US, including the Delta (July 1, 2021 to December 20, 2021) and Omicron (December 20, 2021 to June 3, 2022) strains.18 We ran additional sensitivity analyses considering alternate windows of drug exposure (90 days and 360 days).
## Computational resources
Bulk data queries were performed using Structured Query Language (SQL) in a Snowflake workspace (Snowflake Inc., Bozeman, MT). All statistical analyses were performed in a Python 3.10 environment using the scikit-learn (version 1.1.2), statsmodels (version 0.13.2), psmpy (version 0.3.5), NumPy (version 1.23.2), and pandas (version 1.4.3) packages.
## Role of the funding source
No study sponsor had any role in the design of the study; in the collection, analysis, or interpretation of the data; in the composition of the manuscript; or in the decision to submit the manuscript for publication.
## RESULTS
From the database, we identified 909,531 patients with a COVID-19-related hospitalization, of whom 11,228 patients were excluded due to age below the study minimum ($$n = 11,206$$) or missing information for gender ($$n = 22$$; Figure 1). Within the final study population of 898,303, the treatment group comprised 16,324 patients with a fulfilled prescription for spironolactone prior to hospitalization.
The study population had a median age of 64.9 (IQR 56.5-78.6) years, and 465,124 ($51.8\%$) were women. 59,937 patients ($6.7\%$) met the ventilation endpoint, and 26,515 patients ($3.0\%$) met the mortality endpoint. Endpoints were time-stationary, with no significant correlation between event frequency and claim month over the study window ($p = 0.338$ for ventilation; $p = 0.248$ for mortality). Following propensity score matching, all covariates were well balanced between treatment and control groups, with standardized biases of less than 0.1 in all cases (Table I).
Following 1:1 propensity score matching, 1,212 of 16,324 patients ($7.4\%$) in the spironolactone treatment group met the ventilation endpoint, while 1,459 of 16,324 patients ($8.9\%$) in the matched control group met the endpoint (Table II). In the paired analysis, the OR for ventilation between treatment and controls was 0.82 ($95\%$ CI: 0.75-0.88; $p < 0.001$). The unmatched logistic regression analysis supported this protective effect, finding an OR of 0.78 ($95\%$ CI: 0.73-0.83; $p < 0.001$).
Spironolactone treatment was also associated with a protective effect against mortality, with 521 of 16,324 patients ($3.2\%$) in the treatment group and 592 of 16,324 patients ($3.6\%$) in the matched control group meeting the endpoint. In the paired analysis, this corresponded to an OR of 0.88 ($95\%$ CI: 0.78-0.99; $p = 0.033$), which was supported by similar findings in the regression analysis (OR 0.85; $95\%$ CI: 0.78-0.93; $p < 0.001$).
As a study-level control, we also measured the effect of vaccination on both endpoints, finding strongly protective effects in all cases. For ventilation, the OR for vaccination was 0.72 ($95\%$ CI: 0.69-0.76; $p < 0.001$) in the paired analysis and 0.68 ($95\%$ CI: 0.66-0.71; $p < 0.001$) in the regression analysis. For mortality, vaccination was associated with OR values of 0.62 ($95\%$ CI: 0.58-0.67; $p < 0.001$) and 0.61 ($95\%$ CI: 0.57-0.64; $p < 0.001$) in the paired and regression analyses, respectively. These findings are consistent with previous estimates of the protective effect of vaccination in hospitalized COVID-19 patients.19,20

We next analyzed treatment effects in predefined patient subgroups (Table III). For the ventilation endpoint, we observed a stronger protective treatment effect in men than in the general population, with an OR of 0.76 ($95\%$ CI: 0.67-0.85; $p < 0.001$) in the matched analysis and 0.72 ($95\%$ CI: 0.66-0.79; $p < 0.001$) in the regression analysis. The effect in women was weaker than in the general population, with estimated OR values of 0.87 ($95\%$ CI: 0.77-0.97; $p = 0.012$) and 0.84 ($95\%$ CI: 0.77-0.92; $p < 0.001$) in the matched and regression analyses, respectively. In non-high-BMI patients, treatment was associated with a more protective effect than in the general population, with an OR of 0.81 ($95\%$ CI: 0.73-0.89; $p < 0.001$) in the matched analysis and 0.74 ($95\%$ CI: 0.69-0.80) in the regression analysis. The estimated treatment effect was greatly reduced in high-BMI patients and did not meet significance in the matched analysis (OR 0.93; $95\%$ CI: 0.81-1.07; $p = 0.352$), although it was nominally significant in the regression analysis (OR 0.89; $95\%$ CI: 0.80-0.99; $p = 0.026$). Treatment was associated with a protective effect in all age brackets; however, it was most protective in the youngest bracket (<60 years old) and least protective in the oldest bracket (≥75 years old), with estimated ORs of 0.78 ($95\%$ CI: 0.67-0.91; $p = 0.002$) and 0.81 ($95\%$ CI: 0.67-0.97; $p = 0.028$), respectively, in the paired analysis. The protective effect was also diminished for hospitalizations during the Omicron wave, with an OR of 0.91 ($95\%$ CI: 0.74-1.11; $p = 0.37$) in the paired analysis and 0.80 ($95\%$ CI: 0.69-0.93; $p = 0.004$) in the regression analysis.
The lower frequency of events for the mortality endpoint limited our ability to detect significant effects in subgroups for this endpoint. In the subgroup analysis for mortality (Supplementary Table S3), only males (OR 0.83; $95\%$ CI: 0.70-0.98; $p = 0.029$) and the 60-74 age bracket (OR 0.80; $95\%$ CI: 0.67-0.96; $p = 0.017$) met significance in the PSM analysis.
We also conducted sensitivity analyses to assess the effect of parameter selection in our analysis (Table IV). For the ventilation endpoint, treatment remained significantly associated with improved outcomes in all sensitivity analyses, including using a 90-day window (OR 0.82; $95\%$ CI: 0.75-0.90; $p < 0.001$) and a 360-day window (OR 0.89; $95\%$ CI: 0.83-0.95; $p = 0.001$) for drug exposure. For the mortality endpoint, significantly protective treatment effects were likewise observed for both the 90-day (OR 0.81; $95\%$ CI: 0.70-0.93; $p = 0.002$) and 360-day (OR 0.85; $95\%$ CI: 0.76-0.94; $p = 0.003$) windows.
## DISCUSSION
Our results, supported by the largest cohort study of spironolactone in COVID-19 to date, suggest that spironolactone may improve outcomes in patients hospitalized with COVID-19. In our study, spironolactone use was associated with an $18\%$ reduction in odds of ventilation following admission for COVID-19. This effect was more pronounced in men and in younger patients (15-59 years old), where the effects corresponded to a $22\%$ and $24\%$ reduction in ventilation odds, respectively. In contrast, high BMI (≥30 kg/m2) diminished the observed treatment effect. Spironolactone use was also associated with a significant $12\%$ reduction in odds of mortality. In our analysis, the protective effect of spironolactone amounted to $64\%$ and $32\%$ of the protective effect of COVID-19 vaccination against ventilation and mortality, respectively.
Several previous studies have investigated a possible protective effect for spironolactone in COVID-19, with encouraging but underpowered results. The strongest evidence, prior to our study, was a randomized controlled trial of spironolactone-sitagliptin combination therapy in 263 patients, which demonstrated a significant improvement in a subjective clinical progression score.12 While rates of mortality, intensive care unit admission, intubation, and end-organ damage were reduced in this trial, the effect did not meet statistical significance. A non-randomized trial of another combination therapy, spironolactone with bromhexine, reported faster time to temperature normalization and hospital discharge in the treatment group.11 The largest study prior to this work was a case-control study of 6,462 patients that identified an $80\%$ reduction in odds of spironolactone exposure in COVID-19 patients compared to matched controls, although this study was restricted to patients with liver cirrhosis.10

Spironolactone’s dual role as a RAAS modulator and androgen antagonist provides several plausible mechanisms for inhibition of viral entry. While clinical investigations have not demonstrated a clear relationship between RAAS modulators, such as ACE inhibitors and angiotensin II receptor blockers (ARBs), and clinical outcomes in COVID-19, there is extensive evidence that androgen signaling plays a meaningful role in viral entry.9,21-23 For instance, anti-androgenic drugs have been associated with protective effects in observational studies, and androgen antagonism inhibits SARS-CoV-2 cellular entry in vitro.22,24,25 Intriguingly, our study noted a consistent relationship between effect size and androgen levels in patient subgroups, consistent with a protective effect of spironolactone mediated by inhibition of androgen signaling. For instance, the observed effects were stronger in male, non-obese, and younger patients, all of whom tend to have higher androgen levels than their demographic counterparts.26-28 We also observed a smaller protective effect in hospitalizations during the predominant time period of the Omicron variant, which has been observed to rely less on androgen-dependent pathways for cellular entry.29 While we did not have access to laboratory data to measure a potential relationship between spironolactone effect and androgen levels directly, our results are consistent with such an association.
Our study has several limitations. Although we employ well-characterized causal inference methods like propensity score matching, the observational nature of the study precludes direct causal reasoning. Furthermore, while we control for a large collection of medical and pharmacologic covariates, unmodeled confounders may still exist that could limit the generalizability of our results. In addition, claims data capture only prescription issue and fulfillment and do not guarantee adherence to treatment in our treatment group. Claims data may also be more vulnerable to incomplete or changing coding practices than institutional medical records data. We were also unable to measure dose response in our dataset, although spironolactone dose variability is generally low.30

In conclusion, we show that spironolactone use is associated with improved outcomes following COVID-19 hospitalization in a nationwide cohort of nearly a million hospitalized patients. Treatment was associated with lower odds of ventilation and mortality compared to matched and unmatched controls. Furthermore, the protective effect on ventilation in patient subgroups was consistent with an androgen-dependent mechanism. Our findings support the initiation of well-powered randomized controlled trials to determine the clinical value of spironolactone in the treatment of COVID-19.
# Apelin Enhances the Effects of Fusobacterium nucleatum on Periodontal Ligament Cells In Vitro
## Abstract
This study aimed to explore the effects of *Fusobacterium nucleatum* with or without apelin on periodontal ligament (PDL) cells to better understand pathomechanistic links between periodontitis and obesity. First, the actions of F. nucleatum on COX2, CCL2, and MMP1 expressions were assessed. Subsequently, PDL cells were incubated with F. nucleatum in the presence and absence of apelin to study the modulatory effects of this adipokine on molecules related to inflammation and hard and soft tissue turnover. Regulation of apelin and its receptor (APJ) by F. nucleatum was also studied. F. nucleatum elevated COX2, CCL2, and MMP1 expressions in a dose- and time-dependent manner. The combination of F. nucleatum and apelin led to the highest ($p \leq 0.05$) expression levels of COX2, CCL2, CXCL8, TNF-α, and MMP1 at 48 h. The effects of F. nucleatum and/or apelin on CCL2 and MMP1 were MEK1/2- and partially NF-κB-dependent. The combined effects of F. nucleatum and apelin on CCL2 and MMP1 were also observed at the protein level. Moreover, F. nucleatum downregulated ($p \leq 0.05$) apelin and APJ expressions. In conclusion, obesity could contribute to periodontitis through apelin. The local production of apelin/APJ in PDL cells also suggests a role of these molecules in the pathogenesis of periodontitis.
## 1. Introduction
Periodontitis is a chronic inflammatory disease mainly caused by a subgingival dysbiotic microbiota whose balance is shifted by several factors [1]. In periodontitis, the homeostasis between the host and the subgingival microbiota is disturbed, and hyperinflammatory immune responses of the host to this microbiota can lead to alveolar bone resorption and eventually tooth loss [1]. Risk factors such as smoking or genetic predisposition can contribute to the initiation and progression of periodontitis [2]. There is strong evidence that periodontitis is associated with systemic diseases and conditions, such as diabetes mellitus, cardiovascular disease, hypertension, obesity, and metabolic syndrome. It is thought that oral microorganisms, their components, or metabolites, as well as inflammatory mediators, enter the systemic circulation and thereby reach other parts of the human body [3,4,5,6,7,8].

Obesity is defined as abnormal or excessive fat accumulation that presents a risk to health [9]. Because adipose tissue is not only an energy reservoir but also a metabolic organ, dysregulation of cytokines, hormones, and metabolites occurs when this tissue expands [10]. There is evidence that obese individuals have systemically higher levels of CRP, TNF-α, and IL-6 than normal-weight subjects and are therefore in a chronic subclinical inflammatory state [11]. Numerous pathomechanisms have been suggested to underlie the link between periodontitis and obesity, among them adipokines [12]. Adipokines are cytokines produced by adipocytes, but also by other cell types, such as periodontal cells [13,14,15,16,17,18]. Various adipokines, such as leptin, visfatin, adiponectin, and resistin, have been identified and studied in regard to systemic diseases. These adipokines are suggested to have a wide range of functions, including regulation of insulin metabolism, thirst and hunger sensation, angiogenesis, energy balance, bone metabolism, coagulation, and hematopoiesis, as well as inflammation and its resolution [13,19]. Adiponectin has mainly anti-inflammatory effects, whereas resistin, visfatin, and leptin are more pro-inflammatory [20,21].

Another adipokine, which has been studied rather less so far, is apelin. Apelin was first isolated and described in 1998 [22]. As early as 1993, the apelin receptor (angiotensin II protein J receptor, APJ) had been discovered in humans as a G protein-coupled receptor whose gene locus is located on chromosome 11 [23]. Apelin has a wide range of effects, which differ depending on cell type and tissue. Originally, apelin was isolated from tissues of the central nervous system, and accordingly the molecule was found to be important in central signal transduction [24]. As research progressed, the apelin-APJ system was discovered in other tissues as well. For example, the molecule interferes with the regulation of bone turnover by modulating apoptosis, proliferation, and differentiation of osteoblasts [25,26]. It has been shown that apelin levels are increased in systemic diseases and conditions such as obesity and diabetes [27,28]. A recent study examined serum levels of apelin in diabetes and/or periodontitis patients [29]. Patients who suffered from both diabetes and periodontitis exhibited the highest serum apelin levels compared to healthy individuals. Another study showed that the salivary apelin levels of diabetic patients with periodontitis were increased compared to healthy individuals [30].
This adipokine also has modulatory properties regarding inflammation. For example, apelin can increase the expression of TNF-α and IL-1β in glial cells, but at the same time downregulate inflammatory mediators in lung and heart cells [31,32]. Apelin could therefore be a critical molecule mediating the harmful effects of obesity on periodontal tissues. The aim of this in vitro study was to explore the regulatory effects of *Fusobacterium nucleatum* in the presence or absence of apelin on periodontal ligament (PDL) cells in order to test the hypothesis that apelin might be one of the pathomechanistic links between periodontal disease and obesity.
## 2.1. Regulation of COX2, CCL2, and MMP1 Expressions by F. nucleatum
First, we wanted to verify whether F. nucleatum would regulate the expression of COX2, CCL2, and MMP1 in PDL cells. F. nucleatum caused a significant ($p \leq 0.05$) and dose-dependent (O.D.660: 0.000, 0.025, 0.050, and 0.100) upregulation of the pro-inflammatory and proteolytic molecules COX2, CCL2, and MMP1 with the highest expression for the highest bacterial concentration (O.D.660 = 0.100) at 24 h (Figure 1a). In addition, the stimulatory effect of F. nucleatum (O.D.660 = 0.025) on these molecules was also time-dependent ($p \leq 0.05$), as shown in Figure 1b.
## 2.2. Modulatory Effects of Apelin on Pro-Inflammatory Actions by F. nucleatum
Next, we studied whether apelin (1 ng/mL) could modulate the stimulatory actions of F. nucleatum (O.D.660 = 0.025) on the expression of pro-inflammatory markers in PDL cells. Apelin was used at a concentration corresponding to physiological plasma levels and consistent with previous in vitro studies. For F. nucleatum, O.D.660 = 0.025 was chosen because even this minimal dose had a pro-inflammatory effect on PDL cells, as evidenced by a significant increase in the expression of COX2, CCL2, and MMP1. As shown by real-time PCR analysis, apelin significantly ($p \leq 0.05$) increased the F. nucleatum-stimulated expression of CCL2 at 24 h (Figure 2a). For COX2, CXCL8, and TNF-α, no significant modulatory effect of apelin on the F. nucleatum-triggered expression was observed at this time point (Figure 2a). Moreover, apelin caused a further significant ($p \leq 0.05$) elevation of the F. nucleatum-induced expressions of COX2, CCL2, CXCL8, and TNF-α at 48 h (Figure 2b). This shows that the stimulatory influence of apelin on the effects of F. nucleatum was stronger at 48 h than at 24 h.
## 2.3. Modulatory Effects of Apelin on Markers Involved in Soft and Hard Tissue Turnover
We then examined the effect of apelin (1 ng/mL) on the regulation of MMP1, TGF-β1, and RUNX2 by F. nucleatum (O.D.660 = 0.025) in PDL cells (Figure 3). F. nucleatum increased the expression of MMP1 at 24 h (Figure 3a) and 48 h (Figure 3b), and this upregulation was significantly ($p \leq 0.05$) enhanced by apelin at both time points. No upregulation by F. nucleatum was observed for TGF-β1 and RUNX2 at 24 h (Figure 3a) and 48 h (Figure 3b). Apelin had no significant effect on the actions of F. nucleatum on TGF-β1 at 24 h (Figure 3a) and 48 h (Figure 3b) and RUNX2 at 48 h (Figure 3b). Interestingly, apelin significantly ($p \leq 0.05$) counteracted the inhibitory effect of F. nucleatum on RUNX2 expression at 24 h (Figure 3a).
## 2.4. Involvement of Signaling Pathways in the Modulatory Effects of F. nucleatum and/or Apelin on CCL2 and MMP1 Expressions
We next sought to identify intracellular signaling pathways potentially involved in the actions of F. nucleatum on CCL2 and MMP1 in PDL cells. For this purpose, cells were pre-incubated with specific inhibitors of NF-κB or MEK1/2 signaling and subsequently stimulated with F. nucleatum (O.D.660 = 0.025) and/or apelin (1 ng/mL). Pre-incubation of cells with the NF-κB inhibitor resulted in a significant ($p \leq 0.05$) downregulation of CCL2 expression in cells treated with either F. nucleatum alone or in combination with apelin at 24 h (Figure 4a). In contrast, the expressions of CCL2 and MMP1 induced by F. nucleatum and/or apelin were always significantly ($p \leq 0.05$) inhibited by the MEK1/2 inhibitor after 24 h (Figure 4a,b).
## 2.5. Effects of F. nucleatum on Apelin and Its Receptor
We also investigated whether apelin is expressed in PDL cells and, if so, whether this adipokine as well as its receptor are regulated by F. nucleatum (O.D.660 = 0.025). The periodontopathogen downregulated ($p \leq 0.05$) the expression of apelin and APJ over a variety of doses (Figure 5a). A slight time dependence was observed (Figure 5b).
## 2.6. Modulatory Effects of Apelin on CCL2 and MMP1 Protein Induced by F. nucleatum
Finally, we investigated whether apelin (1 ng/mL) can modulate the stimulatory effect of F. nucleatum (O.D.660 = 0.025) on pro-inflammatory markers at the protein level as well. As detected by ELISA, F. nucleatum resulted in increased protein levels of CCL2 and MMP1 in cell supernatants at 24 h and 48 h (Figure 6a,b). Incubation of F. nucleatum-stimulated cells with apelin resulted in a further significant ($p \leq 0.05$) increase in the protein levels of CCL2 at 48 h (Figure 6a) and of MMP1 at 24 h and 48 h (Figure 6b).
## 3. Discussion
This study aimed to investigate the modulatory effect of the adipokine apelin on the action of the periodontopathogen F. nucleatum on PDL cells to better understand the relationship between periodontitis and obesity. Interestingly, apelin was able to modify bacterial regulation of molecules related to inflammation and hard and soft tissue turnover. The combination of F. nucleatum and apelin resulted in the highest expression levels of pro-inflammatory and proteolytic molecules, suggesting that apelin may be a pathomechanistic link mediating deleterious effects of obesity on periodontal tissues. In addition, F. nucleatum caused downregulation of the expression of apelin and its receptor, suggesting a role of these molecules in the pathogenesis of periodontitis.
There is strong evidence for an association between periodontitis and obesity [12,33]. Several studies by our research group have shown that adipokines represent a possible pathomechanistic link underlying this association [15,16,17,18,34,35,36,37]. Leptin, visfatin, and resistin exert pro-inflammatory effects on periodontal cells and tissues, whereas adiponectin has rather protective effects on periodontal cells [33]. However, with respect to the periodontium, almost nothing is known about the production, regulation, and action of apelin, another adipokine whose serum levels are altered in obesity [28]. Recently, Hirani et al. investigated the serum level of apelin in periodontally and systemically healthy individuals and in periodontitis patients with and without type 2 diabetes [29]. The study showed that apelin levels were higher in the periodontitis group than in the healthy controls. When patients had concomitant periodontitis and type 2 diabetes, apelin levels were highest. The authors concluded that the increased expression of apelin in patients with periodontitis and type 2 diabetes might indicate a possible role of this adipokine in inflammation and glucose regulation. Sarhat et al. examined the salivary apelin levels of periodontally diseased diabetic patients and of periodontally and systemically healthy individuals [30]. They also found the highest apelin levels in periodontitis patients with diabetes. In our study, the periodontopathogen F. nucleatum led to a dose- and time-dependent upregulation of pro-inflammatory and proteolytic molecules. Interestingly, apelin caused an increase in the F. nucleatum-stimulated expression of these pro-inflammatory and proteolytic molecules. In this respect, our in vitro data support the notion that apelin may be associated with inflammation. Lee et al. also investigated the relationship between apelin and periodontitis and found a decrease in apelin expression in gingival tissues from periodontitis patients, which is in contrast to the aforementioned studies [38]. Moreover, overexpression of apelin or treatment with exogenous apelin suppressed TNF-α-stimulated gene expressions of MMP1, IL-6, and COX2 in PDL cells [38]. Further studies are needed to clarify whether apelin levels in gingiva, sulcus fluid, saliva, and serum are increased or decreased in gingivitis and periodontitis, and whether apelin exerts pro- or anti-inflammatory effects. In addition, it should be investigated whether periodontal therapy results in a change in these apelin levels.
Furthermore, we were interested in whether apelin and its receptor are expressed in periodontal cells, and if so, whether this expression can be regulated by F. nucleatum. Our in vitro experiments with PDL cells showed that both apelin and its receptor are constitutively produced in these cells. Moreover, our experiments revealed that the periodontopathogen F. nucleatum inhibited the expression of apelin and its receptor. In the study by Lee et al., incubation of PDL cells and gingival fibroblasts with the inflammatory mediator TNF-α also resulted in downregulation of apelin [38]. Therefore, this and our study suggest that the apelin-APJ system is downregulated during periodontal infection and inflammation, at least initially. Because our results suggest that apelin exerts rather pro-inflammatory effects, the initial downregulation of apelin and its receptor may represent the host tissues’ attempt to limit inflammation and associated tissue destruction. However, our experiments also showed that this possibly tissue-protective downregulation of apelin and its receptor was no longer observed after 48 h, which may suggest that in persistent periodontal infection, the apelin-APJ system may be of critical importance in the pathogenesis of periodontitis.
F. nucleatum is an obligate anaerobic gram-negative bacterium that is highly prevalent in the subgingival biofilm and associated with the etiopathogenesis of periodontitis [39,40]. Infection with F. nucleatum alone has been shown to cause alveolar bone loss in a murine experimental periodontitis model [41]. In combination with T. forsythia or P. gingivalis, F. nucleatum synergistically stimulated the host immune response and induced alveolar bone loss in this experimental periodontitis model [42,43]. Like the red complex bacteria, F. nucleatum is associated with periodontitis [44]. As expected from our previous studies [45,46,47], F. nucleatum led to increased expressions of pro-inflammatory and proteolytic molecules, underlining the special role of this bacterium in periodontal inflammation and destruction. As in our previous studies, F. nucleatum was used as a lysate, so several factors may have been responsible for the observed stimulatory effects. Studies using live F. nucleatum or even biofilms consisting of a variety of different bacteria should be performed in the future to confirm the results of this study.
Our study clearly demonstrates that apelin can exert pro-inflammatory effects and thus enhance periodontal inflammatory processes. Although there are numerous publications on anti-inflammatory and thus protective effects of apelin [48,49], there are also studies that have demonstrated pro-inflammatory effects of apelin [32,50].
Our analyses of intracellular signal transduction suggest that the pro-inflammatory effects of F. nucleatum and/or apelin are realized at least partially through MAPK and NF-κB signaling. Further studies should clarify which other intracellular signaling pathways apelin uses for its modulatory effects. Our results are thus in agreement with other studies that have also shown that apelin uses, among others, the MAPK and NF-κB signaling pathways for its effects [27,48,51,52].
In the present study, apelin and APJ were also shown to be produced in periodontal cells and regulated by periodontal pathogenic bacteria, suggesting that apelin and APJ may play an important role in the pathogenesis of periodontitis. Interestingly, F. nucleatum led to downregulation of apelin and its receptor in PDL cells, which would imply an anti-inflammatory effect, in accordance with the other results of this study. However, because the inhibitory effect of F. nucleatum was lost with increasing duration of bacterial incubation, this protective effect might also be lacking in persistent periodontal infection. Future studies should also address the apelin-APJ system in other cells of the periodontium, e.g., gingival epithelial cells and fibroblasts.
The regulation of apelin production in periodontal cells by bacterial stimulation suggests that apelin levels in saliva, sulcus fluid, gingiva, and serum may be altered during periodontal inflammation. Clinical studies of experimental gingivitis and periodontitis, as well as of periodontal therapy, i.e., intervention studies, should further clarify the role of apelin locally in the periodontium but also systemically for the whole organism.
In summary, within its limitations, our in vitro study demonstrated that the adipokine apelin is able to modulate the effects of F. nucleatum on molecules associated with inflammation and hard and soft tissue turnover. Apelin was able to further increase the expression of pro-inflammatory and proteolytic molecules induced by F. nucleatum, which may suggest that apelin may be a pathomechanistic link mediating the deleterious effects of obesity on periodontal tissues. In addition, our study revealed that PDL cells express apelin and APJ and that these expressions are inhibited by F. nucleatum, suggesting a possible role for this adipokine and its receptor in the pathogenesis of periodontitis.
## 4.1. Cell Culture
The human PDL cell line PDL26 was used for cell culture. As described previously, this cell line was obtained from a third molar of a healthy, 26-year-old non-smoking patient [47]. Cells were first cultured in cell culture flasks with nutrient medium. The culture medium was Dulbecco’s Modified Eagle Medium (DMEM) GlutaMAX (Invitrogen, Karlsruhe, Germany) supplemented with $10\%$ fetal bovine serum (FBS, Invitrogen), 100 U/mL penicillin, and 100 μg/mL streptomycin (Invitrogen). Cells were maintained in an incubator at 37 °C in a humidified atmosphere of $5\%$ CO2. Cells were seeded ($1 \times 10^{5}$ cells/well) on 6-well culture plates and grown to 70–$80\%$ confluence. The medium was changed every other day, and 24 h before stimulation the FBS concentration was reduced to $1\%$. The periodontopathogenic bacterium F. nucleatum ATCC 25586 was used at different concentrations (optical density, O.D.660 = 0.025, 0.050, and 0.100) to simulate microbial infection in vitro. The bacterial strain was pre-cultivated on Schaedler agar plates (Oxoid, Basingstoke, UK) in an anaerobic atmosphere for 48 h. Subsequently, bacteria were suspended in phosphate-buffered saline (O.D.660 = 1, corresponding to $1.2 \times 10^{9}$ bacterial cells/mL) and subjected twice to ultrasonication (160 W for 15 min), resulting in complete killing. Furthermore, apelin (recombinant human apelin protein, Abcam, Cambridge, United Kingdom) was used for in vitro stimulation at a concentration corresponding to physiological plasma levels (1 ng/mL) and consistent with previous in vitro studies [53,54,55]. In addition, cells were pre-incubated with PDTC (10 µM, Cell Signaling Technology, Danvers, MA, USA), a specific inhibitor of NF-κB, and U0126 (10 µM, Calbiochem, San Diego, CA, USA), a specific inhibitor of MEK1/2 signaling. Untreated cells served as controls.
## 4.2. Real-Time PCR
RNA isolation was performed using the RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer’s instructions. To determine the RNA concentration, the spectrophotometer NanoDrop ND-2000 (Thermo Fisher Scientific, Waltham, MA, USA) was used. Five hundred ng of total RNA was reverse transcribed using the iScript Select cDNA Synthesis Kit (Bio-Rad Laboratories, Munich, Germany) according to the manufacturer’s protocol. Gene expression analysis of apelin and its receptor (APJ), C-C motif chemokine ligand 2 (CCL2), cyclooxygenase-2 (COX-2), C-X-C motif chemokine ligand 8 (CXCL8), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), matrix metalloproteinase 1 (MMP1), runt-related transcription factor 2 (RUNX2), transforming growth factor-beta 1 (TGF-β1), and tumor necrosis factor alpha (TNF-α) was performed by real-time PCR using the PCR thermal cycler CFX96 (Bio-Rad Laboratories), SYBR green PCR master mix (QuantiFast SYBR Green PCR Kit, Qiagen), and specific primers (QuantiTect Primer Assay, Qiagen). One µL of cDNA was mixed with 12.5 µL master mix, 2.5 µL primer, and 9 µL nuclease-free water. The mix was heated at 95 °C for 5 min, followed by 40 cycles of denaturation at 95 °C for 10 s and a combined annealing/extension step at 60 °C for 30 s. Data were analyzed by the comparative threshold cycle ($2^{-\Delta\Delta Ct}$) method.
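For clarity, the comparative threshold cycle calculation can be sketched as follows. This is a minimal illustration with hypothetical Ct values, not the study’s actual data; it assumes GAPDH as the reference gene, as listed above.

```python
def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Comparative threshold cycle (2^-ddCt) method.

    dCt = Ct(target) - Ct(reference gene, here GAPDH), computed once for the
    stimulated sample and once for the untreated control; the fold change
    relative to the control is 2^-(dCt_sample - dCt_control).
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** (-(d_ct_sample - d_ct_control))

# Hypothetical Ct values: the target amplifies two cycles earlier (relative
# to GAPDH) in the stimulated sample than in the control -> 4-fold increase.
print(relative_expression(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```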
## 4.3. ELISA
The protein levels of CCL2 and MMP1 in the cell supernatants were measured using commercially available ELISA kits (DuoSet, R&D Systems, Minneapolis, MN, USA) according to the manufacturer’s instructions. The optical density was determined using a microplate reader (BioTek Instruments, Winooski, VT, USA) set to 450 nm. The readings at 540 nm were subtracted from the readings at 450 nm for optical correction, as per the manufacturer’s recommendation. Cell numbers were checked, and there was no significant difference between groups.
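As an illustration of the wavelength correction and standard-curve read-off described above, a minimal sketch follows. All absorbance values and standard concentrations are hypothetical, and a simple linear interpolation stands in for the curve fit recommended by the kit manufacturer.

```python
import numpy as np

a450 = np.array([0.95, 1.40, 2.10])  # sample wells read at 450 nm (hypothetical)
a540 = np.array([0.05, 0.06, 0.08])  # the same wells read at 540 nm (hypothetical)
corrected = a450 - a540              # optical correction: OD450 minus OD540

# Hypothetical standard dilution series (pg/mL) with its corrected ODs;
# np.interp performs a piecewise-linear read-off, whereas DuoSet protocols
# typically recommend a four-parameter logistic fit.
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_od = np.array([0.10, 0.18, 0.35, 0.70, 1.30, 2.40])
print(np.interp(corrected, std_od, std_conc))  # estimated concentrations
```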
## 4.4. Statistical Analysis
The statistical analysis was performed using GraphPad Prism (version 9.2.0, GraphPad Software, San Diego, CA, USA). For data analysis, mean values and standard errors of the mean (SEM) were calculated. Data were checked for normal distribution and subsequently analyzed with the t-test (parametric) or Mann–Whitney U test (non-parametric). For multiple comparisons, ANOVA or the Kruskal–Wallis test was applied, depending on normal distribution. Dunnett’s (parametric) or Dunn’s (non-parametric) test served as post hoc tests. The significance level was set at $p \leq 0.05$ for all experiments.
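The test-selection logic described above can be sketched as follows. This is a minimal illustration using SciPy rather than GraphPad Prism, with the post hoc steps (Dunnett’s or Dunn’s test) omitted; `groups` is a hypothetical list of per-condition measurement arrays.

```python
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """Check normality (Shapiro-Wilk), then pick a parametric or
    non-parametric test, mirroring the workflow described above."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if len(groups) == 2:
        name = "t-test" if normal else "Mann-Whitney U"
        result = stats.ttest_ind(*groups) if normal else stats.mannwhitneyu(*groups)
    else:
        name = "ANOVA" if normal else "Kruskal-Wallis"
        result = stats.f_oneway(*groups) if normal else stats.kruskal(*groups)
    return name, result
```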
# Fibrinogen as a predictor of injury severity and mortality among patients with traumatic brain injury in Sub-Saharan Africa: a prospective study
## Abstract
### Introduction
Fibrinogen levels drop more quickly than those of any other coagulation factor in severe trauma such as Traumatic Brain Injury (TBI). Contemporary studies show that fibrinogen concentrations < 2 g/L are strongly related to mortality. However, little is known regarding fibrinogen levels and TBI severity, as well as mortality, in sub-Saharan Africa. We therefore set out to determine whether fibrinogen levels are associated with TBI severity and seven-day outcomes.
### Objectives
To determine the sensitivity and specificity of fibrinogen levels and the association with severity and mortality among TBI patients at Mulago Hospital.
### Methods
We prospectively enrolled 213 patients with TBI aged 13 to 60 years and presenting within 24 hours of injury. Patients with pre-existing coagulopathy, concurrent use of anticoagulant or antiplatelet agents, pre-existing hepatic insufficiency, or diabetes mellitus, and patients who were pregnant, were excluded. Fibrinogen levels were determined using the Clauss fibrinogen assay.
### Results
The majority of the patients were male ($88.7\%$) and nearly half were aged 30 or less ($48.8\%$). Fibrinogen levels less than 2 g/L were observed in 74 ($35.1\%$) of the patients, while levels above 4.5 g/L were observed in 30 ($14.2\%$). The average time spent in the study was 3.7 ± 2.4 days. The sensitivity and specificity using fibrinogen < 2 g/L were $56.5\%$ and $72.9\%$, respectively. Fibrinogen levels predicted TBI severity with an AUC of 0.656 ($95\%$ CI 0.58–0.73; $p < 0.001$). Fibrinogen levels < 2 g/L (hypofibrinogenemia) were independently associated with severe TBI (AOR 2.87, $95\%$ CI 1.34–6.14; $$p \leq 0.007$$). Levels above 4.5 g/L were also independently associated with injury severity (AOR 2.89, $95\%$ CI 1.12–7.48; $p \leq 0.05$), and fibrinogen levels above 4.5 g/L were independently associated with mortality (OR 4.5, $95\%$ CI 1.47–13.61; $p \leq 0.05$).
### Conclusions
The fibrinogen level is a useful tool for predicting severity and mortality of TBI in our setting. We recommend the routine use of fibrinogen levels in TBI patient evaluations, as levels below 2 g/L and levels above 4.5 g/L are associated with severe injuries and mortality.
## Introduction
Trauma accounts for $11\%$ of the world’s disability-adjusted life years (DALYs), with $90\%$ of these occurring in low- and middle-income countries.[1] Traumatic Brain Injury (TBI) per se is a major cause of disability globally, with an incidence rate of 200 per 100,000 people per year.[2] In Uganda, head injuries with TBI are the commonest type of injury, accounting for $44\%$ of trauma admissions at hospitals in Kampala[3] and a mortality rate of 220 per 100,000.[4] The morbidity and mortality due to TBI are higher in low-income and middle-income countries[5, 6] despite advancements in the clinical evaluation of patients with TBI using standardized protocols such as the Advanced Trauma Life Support (ATLS) protocols.[7, 8] Evidence from prior studies shows that deaths from trauma can be prevented if the problem is identified adequately and in time and the appropriate line of management is decided early.[9] In low- and middle-income settings where access to prompt investigation modalities for TBI victims is limited, clinicians often find themselves relying on trauma algorithms, trauma assessment tools and clinical examination findings to diagnose and direct TBI management. Trauma assessment tools employed in the evaluation of TBI patients include the Abbreviated Injury Score (AIS), the Trauma Injury Severity Score (TRISS) and, specifically for TBI, the Glasgow Coma Scale (GCS).[10] These assessment tools have been found to have considerable limitations and may not correlate well with severity of injury.[11] Among these, the GCS remains the commonest tool used to assess TBI severity in Sub-Saharan Africa.[12, 13] Despite its wide utility, the GCS does not provide specific parametric clinical information about the pathophysiologic abnormalities in TBI, which are the targets of our interventions.[14] One example of a pathophysiologic event is intracranial bleeding, which is also associated with poor clinical outcomes such as mortality and disability.[15] Fibrinogen, which is a positive acute-phase protein[16] as well as a haemostatic protein,[17] has been studied retrospectively as a prognostic indicator among TBI patients and as a predictor of in-hospital mortality.[18–20] Following trauma, fibrinogen levels deteriorate more frequently and earlier than other routine coagulation parameters.[21, 22] Additionally, hypofibrinogenemia has been described as a common occurrence in TBI, possibly due to trauma-induced coagulopathy.[19, 21, 23] A recent study showed that fibrinogen concentrations less than 2 g/L were associated with poor outcomes, including mortality, in contrast to concentrations above 2.5 g/L, which were associated with favourable outcomes.[18, 19] Current guidelines also emphasize that fibrinogen concentrations be maintained above 1.5–2.0 g/L in severe trauma patients.[22] Fibrinogen therefore has a pivotal role in TBI; however, little is known regarding its sensitivity and specificity in the diagnosis of severe TBI in sub-Saharan Africa.
We therefore set out to study the predictive ability of fibrinogen levels in determining TBI severity and predicting clinical outcomes in TBI as a step towards improving prompt diagnosis and management of TBI victims. The aims of the study were to determine the sensitivity and specificity of low fibrinogen levels in predicting the severity of traumatic brain injuries and to describe the association of fibrinogen levels with TBI severity and 7-day outcomes among TBI patients at Mulago Hospital. We hypothesized that plasma fibrinogen levels are associated with severity and short-term clinical outcomes in TBI patients.
## Study design and setting.
We prospectively studied 213 randomly selected TBI patients admitted to the Casualty unit at Mulago National Referral Hospital (MNRH) between December 2021 and May 2022. MNRH is the largest public hospital in Uganda, located approximately 5 kilometres from the city centre, and it receives $75\%$ of injured victims in Kampala.[24] The Casualty unit of the hospital is the entry point for all trauma cases presenting to the hospital.
## Study population and sampling.
The inclusion criteria were as follows: patients aged 13 to 60 years with a clinical diagnosis of TBI documented using Computed Tomography (CT) or the Glasgow Coma Scale (GCS) score by a clinician and admitted within 24 hours of TBI occurrence. The age range of 13 to 60 years was used in consideration of the altered metabolism of fibrinogen that occurs at the young and elderly extremes of age.[25–27] TBI in this study was defined as any alteration of brain function, or other evidence of brain pathology based on the GCS or head CT scan, caused by an external force such as accidents, assault, falls and burns.[28] Patients with concurrent use of anticoagulant or antiplatelet agents, a medical diagnosis of liver disease, hypertension, or diabetes mellitus, patients admitted more than 24 hours after injury, and pregnant women were excluded.
To achieve our first objective, we used the proportion of patients with low fibrinogen from a prior study [19] and a level of precision of $7\%$ at a $95\%$ confidence interval to determine a sample size of 186 patients using the Kish and Leslie formula.[29] To study the relationship of fibrinogen with outcomes, we calculated the sample size using the formula for cohort studies based on the comparison of two proportions representing the event rates in the exposed and non-exposed groups.[30] Using proportions from the study by Lv et al.,[31] with a $95\%$ CI and power of $80\%$, we determined a sample size of 140 patients for the cohort. Adjusting upwards for losses to follow-up, we estimated the sample size to be 200 patients. We therefore enrolled a total of 213 patients using systematic random sampling to answer our objectives. All patients were evaluated and treated according to the local protocol. Informed consent was obtained from the patients included in the study; for unconscious patients, a waiver of informed consent was obtained from the Research Ethics Committee of Makerere University and the Mulago Hospital Ethics committee.
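The single-proportion calculation can be reproduced directly. The sketch below assumes p = 0.386, the proportion with fibrinogen < 2 g/L reported by Lv et al. [19] (as cited in the Discussion), together with the stated $7\%$ precision and $95\%$ confidence level.

```python
import math

def kish_leslie_n(p: float, d: float, z: float = 1.96) -> int:
    """Kish & Leslie sample size for a single proportion: n = z^2 p(1-p) / d^2."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# p: expected proportion with low fibrinogen; d: absolute precision;
# z = 1.96 corresponds to a 95% confidence interval.
print(kish_leslie_n(p=0.386, d=0.07))  # -> 186, matching the stated sample size
```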
## Study procedure and data Collection.
Data obtained included demographic information such as age, sex, occupation, time of injury, level of education, mechanism of injury and type of head injury. Clinical data including blood pressure, pulse oximetry, temperature, pupillary reaction, CT scan results, GCS score and fibrinogen levels taken at the time of admission were also obtained. The severity of injury was determined by the GCS score obtained by the neurosurgical team; a GCS score of ≤ 8 was categorized as severe TBI and scores of 9–15 as non-severe TBI (Fig. 1). Fibrinogen levels were measured by the Clauss fibrinogen assay using the “Yumizen G FIB 5” reagent. The test was carried out on fresh decalcified venous blood obtained from the participants. On admission, 5 millilitres of venous blood were drawn from each participant into $3.2\%$ sodium citrate vacutainers and transported to the laboratory within 60 minutes of collection for analysis. Samples were centrifuged to obtain plasma, which was prepared for analysis as a 1:10 dilution with Yumizen G IMIDAZOL buffer. The prepared sample was then analysed using an automated analyser and results were recorded in g/L. Samples with levels < 1 g/L or above 5 g/L were retested at 1:5 and 1:20 dilutions, respectively, to obtain final results. Fibrinogen levels between 2 and 4.5 g/L were categorized as normal.[32] A fibrinogen level of < 2 g/L was considered low and a level of > 4.5 g/L high, according to standard laboratory reference values. The patients were followed up daily for 7 days and outcomes were documented. The clinical outcome studied was in-hospital mortality within 7 days of admission. The 11 patients lost to follow-up were not analysed for outcomes (Fig. 1); they were, however, included in the analysis of the association between fibrinogen and injury severity on admission to the hospital.
## Statistical analysis.
All study data collected were entered in Epidata version 4.6 software, cleaned and exported to STATA version 14 for analysis. Continuous variables were summarised as means with standard deviations. Categorical variables are expressed as percentages. Bivariate analyses of categorical variables were performed using Pearson’s chi-squared test and presented as p-values. A Receiver Operating Characteristic (ROC) curve was used to describe the predictive ability of fibrinogen levels in TBI. The sensitivity and positive predictive value, and the specificity and negative predictive value, of fibrinogen levels were calculated using a 2 × 2 table. Binary logistic regression models were used to describe the relationship between categorical variables in the patient group with GCS ≤ 8 and that with GCS ≥ 9. Following bivariate analyses, multivariate logistic regression models were used to evaluate the association between fibrinogen and TBI severity as well as in-hospital mortality. All patients with missing data were excluded from the analyses. The models were tested for multicollinearity and goodness of fit, and all independent variables with a correlation coefficient above ± 0.4 were excluded from the logistic regression model. The relationships are presented as odds ratios with $95\%$ confidence intervals. All statistical analyses were performed using Stata statistical software (StataCorp. 2015. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP). For all analyses, statistical significance was set at $p \leq 0.05.$
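For readers who wish to reproduce the diagnostic metrics, the 2 × 2 calculation is sketched below. The cell counts are hypothetical (the paper reports the derived values, not the raw table) and were chosen only to approximate the reported sensitivity and specificity.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2 x 2 table.

    Test positive = fibrinogen < 2 g/L; disease positive = severe TBI
    (GCS <= 8), following the definitions above.
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for illustration only.
print(diagnostic_metrics(tp=57, fp=30, fn=44, tn=82))
# sensitivity ~ 0.564, specificity ~ 0.732
```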
## Results
A total of 213 TBI patients were included in the analysis. Road Traffic Crashes (RTCs) were responsible for TBI in $72.3\%$ ($$n = 154$$) of the patients, followed by assault ($$n = 45$$, $21.13\%$). 101 ($47.42\%$) of the participants had severe TBI, while 112 ($52.58\%$) had non-severe TBI. The majority of the participants were male ($$n = 189$$, $88.73\%$; M:F = 189:24) and aged 30 or less ($$n = 104$$, $48.83\%$). Most of the participants were casual labourers ($$n = 65$$, $30.52\%$) and educated to primary school level ($$n = 99$$, $46.48\%$). The average age of the study population was 32.42 ± 11.98 years, with a minimum age of 13 and a maximum age of 60. The majority of patients had closed head injuries ($$n = 115$$, $53.99\%$). The peak time of injury among the patients was during the evening hours between 1700 hrs and 2300 hrs, with $51.5\%$ ($$n = 103$$) of injuries occurring during this time of day. The average time spent in the study by the participants was 3.7 ± 2.4 days (Table 1). The average length of time spent in hospital before discharge was 4 (± 1.79) days.
Sixty-six patients ($31\%$) were discharged within 4 days of stay, while 21 ($10\%$) were discharged after 4 days. The average time spent in the study before death occurred was 2.1 (± 1.98) days. In the majority (107, $50.71\%$) of the patients, fibrinogen levels were between 2 and 4.5 g/L. The maximum level observed was 7.81 g/L; the minimum values were as low as < 1 g/L. Levels < 2 g/L were observed in 74 ($35.07\%$) of the patients, and levels > 4.5 g/L in 30 ($14.22\%$). The 7-day mortality rate was $34.3\%$. Forty-seven patients ($64.38\%$ of deaths) died at Casualty within 24 hours of admission, 14 ($19.18\%$) died in the Neurosurgery unit and 12 ($16.44\%$) in the ICU.
## Discussion
This study set out to determine the sensitivity and specificity of fibrinogen levels and their association with severity and mortality among TBI patients at Mulago Hospital. There are no studies addressing this topic from sub-Saharan Africa, and this is the first study in Uganda to describe this relationship. Despite the considerable evidence available from prior studies regarding the value of fibrinogen in the prognosis of TBI outcomes,[19, 33] little is known concerning its predictive ability in the diagnosis of severe TBI.
The sensitivity using fibrinogen < 2 g/L (hypofibrinogenemia) was $56.5\%$, with a positive predictive value of $64.9\%$. The specificity was $72.9\%$, with a negative predictive value of $61.7\%$. In addition, hypofibrinogenemia (fibrinogen levels < 2 g/L) was common, occurring in $35.07\%$ of TBI patients on admission. This is similar to the $38.6\%$ found in a previous study by Lv et al.[19] Our study also found that $20.4\%$ of patients with TBI had high levels of fibrinogen (> 4.5 g/L).
Recent research showed that, for patients admitted with severe TBI, fibrinogen levels < 2 g/L on admission are strongly related to increased mortality.[19] In addition, studies have demonstrated that in severe trauma, fibrinogen is reduced to critical levels.[21, 34] The critical value of fibrinogen in trauma remains a matter of contention.[35] Floccard et al. defined critical levels as ≤ 1.0 g/L and abnormal levels as 1.0–1.8 g/L, all of which have been reported in patients with severe trauma.[36] By contrast, high fibrinogen levels have been described as protective in patients with multiple trauma.[35] Furthermore, previous studies have demonstrated that TBI is associated with abnormalities in clot formation due to differences in fibrinogen levels among victims.[37] Fibrinogen could therefore be used as a marker or predictor of TBI severity.
The study found that low fibrinogen levels (< 2 g/L) were fairly predictive of TBI severity, with an AUC of 0.656, a sensitivity of $56.5\%$ and a specificity of $72.9\%$. Conversely, the absence of hypofibrinogenemia was associated with milder forms of TBI. This is consistent with what previous studies have described in severe trauma.
This study shows that the likelihood of having a severe form of TBI increases with low levels of fibrinogen (< 2 g/L). A possible explanation for this relationship stems from the presence of intracranial bleeding, which is a common occurrence in TBI, as noted in the CRASH trial.[38] Trauma-induced coagulopathy is a crucial element in severe TBI, especially when compounded by intracranial bleeding. The consumption of clotting factors and platelets in response to intracranial bleeding further lowers fibrinogen levels.[39] TBI is also associated with systemic hyperfibrinolysis, which occurs in as many as $20\%$ of critical trauma patients, lowering fibrinogen levels further.[40, 41] In addition, prior studies have shown that fibrinogen levels drop drastically and most rapidly during haemorrhage.[34] Therefore, the association of low fibrinogen levels with severe TBI is possibly due to a combination of haemorrhage and the trauma-induced coagulopathy that occurs in severe TBI.[42] Although the association of severity with low fibrinogen levels is stronger, high fibrinogen levels (> 4.5 g/L) were also associated with severe TBI. This could be due to the inflammation that accompanies major trauma and disruption of the blood-brain barrier with release of procoagulant molecules.[37, 43] A study by Samuels et al. found that TBI patients commonly presented with a spectrum ranging from hypocoagulability to hypercoagulability.[37] It is therefore likely that severe TBI without intracranial haemorrhage leads to high fibrinogen levels, while severe TBI with haemorrhage lowers fibrinogen levels.
Unlike prior research findings,[19] this study showed that fibrinogen levels > 4.5 g/L were strong predictors of mortality. A possible explanation of this observation starts with trauma-induced disruption of the blood-brain barrier, which incites an extensive inflammatory response.[16, 44] Such extensive inflammation can be caused by neural cell death with resultant secondary brain oedema.[43] Diffuse axonal injury is of particular importance here, since it is characterized by an intense inflammatory response.[45] It is this inflammatory process that is responsible for the elevated fibrinogen levels. Organ dysfunction then ensues from this trauma-induced systemic inflammatory state,[46] and its development is related to the intensity of the trauma-induced inflammatory response.[47] Hence, a severe systemic inflammatory response due to a disrupted blood-brain barrier causes early organ dysfunction and, later, multiple organ failure, which leads to death.[47, 48] This possibly explains the association of high fibrinogen levels with mortality.
While one of the strengths of our study is that it was carried out prospectively in a high-volume trauma centre, some limitations need to be acknowledged. This was a single-centre study with a rather small sample size and a limited study duration, owing to the specified time frames for research activities. Secondly, fibrinogen is not a routine test for TBI patients at Mulago Hospital, and replacement therapy is not currently part of the management protocols for patients with TBI; hence, despite the identification of abnormalities among the participants, correction therapy with fibrinogen concentrate was not possible. Patients with coagulopathies did, however, receive other haemostatic support, such as tranexamic acid and fresh frozen plasma. Additional prospective studies with larger sample sizes and longer study durations are needed to confirm the predictability of TBI severity and clinical outcomes using fibrinogen levels.
## Conclusions
In conclusion, we established that fibrinogen is a useful tool in predicting the severity of TBI and mortality. The study reveals that the sensitivity of fibrinogen levels < 2 g/L is $56.5\%$ and the specificity is $72.9\%$. Fibrinogen fairly predicts TBI severity, with an AUC of 0.656.
Fibrinogen levels may be used as an additional tool to screen TBI patients for injury severity. Low fibrinogen levels (< 2 g/L) are predictors of TBI severity, as are high fibrinogen levels (> 4.5 g/L). A fibrinogen level of > 4.5 g/L is a strong predictor of mortality in TBI patients. Integrating fibrinogen as a biomarker in TBI management could therefore provide critical information about trauma physiology and ultimately influence clinical decisions. Additional larger prospective studies are needed to confirm these findings.
## Funding
This research was supported by the National Institute of Neurological Disorders and Stroke under award number D43NS118560. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
## Availability of data and materials
The datasets generated and/or analyzed during the current study are not publicly available due to confidentiality agreements but are available from the corresponding author on reasonable request.
# Brain Ventricle and Choroid Plexus Morphology as Predictor of Treatment Response: Findings from the EMBARC Study
## Abstract
Recent observations suggest a role of choroid plexus (CP) and cerebral ventricular volume (CV) in identifying treatment resistance in major depressive disorder (MDD). We tested the hypothesis that these markers are associated with clinical improvement in subjects from the EMBARC study, as implied by a recent pilot study. The EMBARC study characterized biological markers in a randomized placebo-controlled trial of sertraline vs. placebo in patients with MDD. Associations of baseline volumes of the CV, CP and corpus callosum (CC) with treatment response after 4 weeks of treatment were evaluated. 171 subjects (61 male, 110 female) completed the 4-week assessments; gender, site and age were taken into account for these analyses. As previously reported, no treatment effect of sertraline was observed, but prognostic markers for clinical improvement were identified. Responders ($$n = 54$$) had significantly smaller volumes of the CP and lateral ventricles, whereas the volumes of the mid-anterior and mid-posterior CC were significantly larger compared to non-responders ($$n = 117$$). A positive correlation between CV and CP volume was observed, whereas a negative correlation between CV volume and both the central-anterior and central-posterior parts of the CC emerged. In an exploratory analysis, correlations were observed between enlarged CV and CP volumes on the one hand and signs of metabolic syndrome, in particular plasma triglyceride concentrations, on the other. A primary abnormality of CP function in MDD may be associated with increased ventricular volume and compression of white matter, which may affect the speed or outcome of treatment response. Metabolic markers may mediate this relationship.
## Introduction
The pathophysiology of major depressive disorder (MDD) is heterogeneous. The identification of effective compounds on the basis of a specific underlying neurobiology is hampered by the currently accepted definition of MDD in the relevant classifications, including the DSM-5, which does not take biological differentiation into account. Importantly, this variability may affect not only the response to a given pharmacotherapy but also the natural course of clinical change. This situation has negative implications in the context of clinical trials, in which treatment arms that may show neurobiological heterogeneity at baseline are compared. Stratifying a population on the basis of biological variables could identify a biologically defined subtype suitable for a treatment of a specific nature.
An argument often brought up as a challenge is the operational complexity of such an approach. However, broadly available and easily accessible biological markers exist. Markers that are available and have been shown to differentiate patients with depression include inflammatory markers1–5, metabolic markers, in particular those related to metabolic syndrome5, 6, and neuroendocrine characteristics1, 7, 8. More recently, markers of autonomic regulation, including blood pressure and heart rate variability, have received renewed attention9–12.
Furthermore, imaging biomarkers have been characterized to differentiate subjects with presumed different clinical responses. Many of these, including volumetry of gray- or white-matter segments, are of high importance from a research perspective but are difficult to assess in standard practice13, 14. A more easily accessible imaging marker, which is unfortunately frequently not reported in recent imaging studies, is cerebral ventricular volume (VV), partly on the grounds that changes in ventricular volume are biologically unspecific, as many different brain areas may contribute to this phenomenon. Here we explore the alternative hypothesis that choroid plexus-driven ventricular expansion results in the compression of surrounding anatomical areas, making ventricular volume changes the potential primary factor. In the context of depression, VV is increased in patients with depression in comparison to healthy controls15–17 and may be related to treatment outcome18. We recently demonstrated an association between increased choroid plexus and ventricular volumes and worse treatment outcome in hospitalized patients with depression, and identified moderators of this relationship19, i.e. body mass index (BMI) and the salivary aldosterone/cortisol ratio. The effect may be mediated by a compression of corpus callosum segments, which will affect anatomical projection areas.
In this context it is important to consider that VV and the volume of the corpus callosum show short-term structural plasticity. Both show sleep-related changes20, and VV is sensitive to stress, at least in animals21. A plausible mediator of these phenomena is again a change in the activity of the choroid plexus (CP). The volumetric determination of the CP is a relatively new area of investigation but is feasible with current MRI techniques. Changes have been described in complex pain syndrome22, anorexia nervosa23, multiple sclerosis24 and, most recently, in major depression25 and psychosis26. Mechanistically, stress leads, in an animal model, to changes in CP gene expression of receptors and mediators that have been linked to MDD, including 5-HT2a, 5-HT2c, glucocorticoid, TNFα, IL1β and BDNF27, as well as the IL1 receptor28 and the CRH receptor29. The choroid plexus may play a role in inducing inflammatory changes in depression and may be involved in sickness behavior2. Downstream mechanisms of the involvement of the CP are therefore at least twofold: first, increased CSF release may lead to mechanical compression of anatomical areas adjacent to the ventricles30. Secondly, molecular moderators may spread into brain tissue via volume transmission31, 32. These moderators may be produced by the CP itself or stem from the circulation.
In this retrospective analysis of data from the EMBARC study, we sought to replicate, in a larger sample, our earlier findings on the relationship between the clinical outcome of patients with depression on the one hand and ventricular volume, choroid plexus function and corpus callosum volume on the other. In an exploratory way, we also correlate metabolic and autonomic markers with the volumes of these anatomical areas in order to generate hypotheses about the causality of the observed relationships.
## Methods
The EMBARC study characterized biological markers in a randomized placebo-controlled trial of sertraline vs. placebo in patients with MDD for 8 weeks, followed by an additional treatment phase based on the outcome of the first 8 weeks of treatment. For the CONSORT statement, see33. The trial was conducted according to the Declaration of Helsinki and approved by the Institutional Review Board at each clinical site. Signed informed consent was obtained from subjects in order to participate in the trial. The main objective was to identify clinical and biological moderators of treatment response34. Patients with early onset (before age 30), chronicity (episode duration > 2 years) or recurrent MDD (two or more recurrences including the current episode) were enrolled.
The clinical parameter of interest was the Hamilton depression rating scale (17-item; HAMD-17). For correlational analyses of clinical improvement we used the ratio of the HAMD-17 score at outcome to the HAMD-17 score at baseline (HAMD-17 ratio). A value of 1 means no change from baseline; a value of 0.7 means a reduction to $70\%$ of the baseline value. Response was defined as a HAMD-17 ratio ≤ 0.5.
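A minimal sketch of this outcome definition follows; the example scores are hypothetical, with the baseline chosen near the sample mean reported below.

```python
def hamd_ratio(baseline: float, outcome: float) -> float:
    """HAMD-17 ratio: outcome score divided by baseline score.
    1.0 = no change; 0.7 = reduction to 70% of the baseline value."""
    return outcome / baseline

def is_responder(baseline: float, outcome: float) -> bool:
    """Response = HAMD-17 ratio <= 0.5, i.e. at least a 50% reduction."""
    return hamd_ratio(baseline, outcome) <= 0.5

# Hypothetical subject: baseline 18.8 (the sample mean), outcome 9.0.
print(round(hamd_ratio(18.8, 9.0), 3), is_responder(18.8, 9.0))  # 0.479 True
```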
For these primary analyses we focused on subjects who completed the first 4 weeks of the placebo-controlled treatment phase. In the current dataset, 207 subjects had assessments with the Hamilton depression rating scale (HAMD) at baseline, of which 171 (61 male, 110 female; age 37.5 ± 13.4; HAMD-17: 18.8 ± 4.7) had an assessment at week 4. We chose the 4-week treatment interval a priori in order to balance the time available for clinical improvement against the number of drop-outs. For the time course of the correlation of the HAMD-17 value with imaging parameters, which we generated as a sensitivity analysis and to show consistency, please see Table S1.
Imaging was processed as described before34, 35. Of the subjects who completed 4 weeks of treatment, 171 also had imaging data at baseline. Associations of the volumes of the CV, CP and corpus callosum (CC) with treatment response were evaluated. The relationship of choroid plexus volume, cerebral ventricular volumes and corpus callosum volume with response after 4 weeks from baseline (HAMD-17 ratio ≤ 0.5, i.e. ≥ $50\%$ reduction of the HAMD) was assessed; gender, age and total brain volume were taken into account for the primary MANCOVA analysis. For the analysis of correlations, Pearson correlation coefficients and p-values are provided. The relationship between choroid plexus and ventricular volumes should be regarded as the primary analysis, as it serves to replicate our earlier findings. As the anatomical parameters of interest are considered to be highly correlated and therefore not independent, correction for multiple testing was not performed. The correlations with metabolic and autonomic parameters have to be regarded as exploratory.
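A minimal sketch of the correlational part of this analysis is given below, assuming a pandas DataFrame with one row per subject; all column names are hypothetical placeholders, not EMBARC variable names.

```python
import pandas as pd
from scipy import stats

def volume_outcome_correlations(df: pd.DataFrame, volume_cols) -> pd.DataFrame:
    """Pearson r and p-value of each baseline volume against the HAMD-17 ratio."""
    rows = []
    for col in volume_cols:
        r, p = stats.pearsonr(df[col], df["hamd17_ratio"])
        rows.append({"volume": col, "pearson_r": r, "p_value": p})
    return pd.DataFrame(rows)

# Example call (hypothetical column names):
# volume_outcome_correlations(df, ["choroid_plexus_left", "lateral_ventricle_left"])
```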
## Results
A correlation between baseline HAMD-17 and the volumes of interest was performed in order to determine potential state-related effects. Choroid plexus volumes were significantly correlated with the HAMD-17 score at baseline ($$n = 217$$; right: Pearson $R = 0.22$, $$p \leq 0.002$$; left: Pearson $R = 0.17$, $$p \leq 0.017$$), whereas no correlation between ventricular volumes or corpus callosum sections and baseline depression severity could be detected (all $p > 0.20$), with the exception of the right lateral ventricle, which showed a trend toward a significant correlation (Pearson $R = 0.13$; $$p \leq 0.06$$).
Regarding the analysis of factors related to treatment outcome: no statistically significant treatment effect of sertraline was observed, as reported earlier36, but prognostic markers for therapy response were identified. Treatment was therefore not a factor in the analyses. In comparing responders and non-responders, we adjusted for gender and age. An overall global significant difference between responders and non-responders was observed for the volumetric parameters ($$p \leq 0.007$$, see Table 2). Univariate analyses revealed that responders at week 4 had significantly smaller volumes of the choroid plexi and lateral ventricles, whereas the volumes of the mid-anterior and mid-posterior CC were significantly larger compared to non-responders (Table 2).
Vice versa, splitting the population at the median ventricular volume clearly demonstrates the difference in the course of depressive symptoms between the two VV groups (Fig. 1): a significant difference in HAMD-17 scores between the high and low VV-volume groups was observed at week 2 and week 4.
As a sensitivity analysis, we also compared the anatomical structures, split into responders vs. non-responders, at each timepoint of the study up to 8 weeks. Choroid plexus volumes at baseline differentiated these groups from week 4 up to week 8 ($p \leq 0.05$); however, the other parameters did not reach statistical significance past week 4. Please see suppl. Table S1 for the stability of the correlation between the volume of anatomical structures and treatment effect over time.
In addition to the comparisons between responders and non-responders, correlations between baseline parameters and the HAMD-17 ratio were performed, which are independent of a chosen cut-off. These correlational analyses confirmed the relationship between clinical change on the one hand and ventricular volume, choroid plexus volume and CC segment volumes at baseline on the other (Tab. 3, Fig. 2). These data, as well as the previous ones, confirm the difference between responders and non-responders regarding anatomical structures and therefore the results of our pilot study. In addition, we explored other factors that may affect the volume of these anatomical areas in an exploratory fashion; these analyses are part of Tab. 3 and need replication. We found that the volumes of both lateral ventricles were positively correlated with LDL-cholesterol and triglyceride levels. The volume of the left lateral ventricle was significantly correlated with systolic blood pressure, and the right showed a trend toward a significant correlation. A similar pattern was observed for the CP volumes. All these parameters are also positively correlated with age, which we corrected for in the primary analysis.
In order to determine the relationship between the anatomical areas of interest, ventricular volumes were correlated with CC segments and CP volumes. A significant positive correlation between CP volumes and lateral ventricle volumes was established. More importantly, a significant negative correlation between third ventricular volumes and the mid-anterior and mid-posterior CC segments, as well as a significant negative correlation between the lateral ventricles and the mid-posterior CC volume were observed (Table S2).
DTI parameters as assessed for the corpus callosum did not predict outcome. However, the volume of the mid-anterior and mid-posterior CC segments, adjusted for total brain volume, correlated negatively with the axial diffusivity of these segments (mid-anterior: $R = -0.42$, $p < 0.001$, $n = 191$; mid-posterior: $R = -0.15$, $p = 0.036$, $n = 196$), whereas the CC segment volumes were not associated with fractional anisotropy (for all, $p > 0.1$).
## Discussion
The primary outcome of this study is that an easily accessible imaging marker, i.e., lateral ventricular volume, shows a strong predictive value for the improvement of depressive symptoms in MDD patients treated with either sertraline or placebo. Mechanistically, this appears to be related to an alteration in choroid plexus function, and both may affect corpus callosum integrity.
The strong relationship of ventricular volume to choroid plexus volume on one hand and the volume of CC segments on the other hand could be of theoretical interest for the pathophysiology of some forms of MDD. A working hypothesis could be that changes in choroid plexus function, i.e., an increased release of CSF volume19, 37, or an increased release of specific bioactive molecules, including inflammation mediators24, 31, may lead to a change in white matter volume and/or integrity. The increased ventricular volume or, alternatively, such bioactive molecules may affect white matter function either by mechanical compression or by an effect on white matter integrity via alterations of oligodendrocyte function. This could be related to changes in myelination or changes in the volume regulation of axons within the CC. Disturbance of white matter integrity has indeed frequently been described in patients with depressive disorders, mainly by using diffusion tensor imaging (DTI) methods38–43. In support of the hypothesis of choroid plexus involvement in this pathway, the activity of the choroid plexus is affected by neuroendocrine influences which have been linked to MDD, in particular vasopressin and aldosterone19, 37, as well as by metabolic markers related to an increased BMI19, 44. These findings are also in line with the role of inflammation, as both aldosterone45–47 and high BMI48–51 show a close association with increased inflammation. Finally, inflammation has recently been associated with increased choroid plexus volume in patients with depression25 and multiple sclerosis24.
Our observation that the HAMD-17 score correlates significantly with choroid plexus volumes at baseline, only by trend with ventricular volumes, and not with CC segment volumes implies that choroid plexus volume has a state characteristic, whereas ventricular volume shows somewhat lesser plasticity in relationship to mood and may have a more trait/chronicity-related characteristic. Corpus callosum segment volumes, furthermore, appear mainly to be trait or risk markers. As mentioned in the introduction, stress leads to an increase in ventricular volume in animals21. Childhood abuse has been related to increased ventricular volumes and reduced white matter volume in later life52, 53, as well as to therapy refractoriness in depression54, 55. This could imply that early life stress affects ventricular and white matter structure via a prolonged choroid plexus activation. However, a recent analysis based on the self-report depression scale QIDS-SR did not confirm a difference in clinical response between subjects with and without childhood adversity56.
Other factors which determine the size of the ventricles, potentially mediated via choroid plexus alterations, are related to metabolic disorders. In particular, a high-fat diet is related to an increased ventricular volume in animals in the context of traumatic stress57. The correlation reported here between triglyceride levels and systolic blood pressure at baseline on one hand and choroid plexus and ventricular volumes on the other hand confirms the influence of metabolic parameters in differentiating patients with depression1. Interestingly, a link between increased ventricular volume and metabolic dysfunction, in particular hyperlipidemia, has also been observed in subjects with normal pressure hydrocephalus58. Similarly, in our pilot study we previously described a strong correlation between BMI and both choroid plexus and ventricular volume19.
As mentioned, markers of inflammation and metabolic disturbances are preferentially present in subjects with atypical depression, in comparison both to healthy subjects and patients with melancholic depression1, 59, 60. This is in line with the current findings, as atypical depression appears to be less responsive to standard antidepressant treatment61. Of importance, patients with atypical depression show in general an earlier age of onset62. The current study only enrolled patients with an age of onset ≤ 30 years, which means that there is probably an enrichment of this subtype in comparison to the general population. Age of onset appears to be associated with specific neurobiological differences in depression63, which may be related to alterations in autonomic function and endocrine characteristics. Whether age of onset also differentiates brain morphology needs further confirmation.
Regarding DTI parameters, no relationship with clinical change was observed. This is in contrast to studies which reported DTI parameters as predictive for response, for example to ketamine64, 65. Nevertheless, we observed that the volume of CC segments correlated inversely with axial diffusivity (AD), i.e., a smaller CC segment volume was correlated with an increased AD. An earlier DTI report from the EMBARC study, which focused on structural connectivity in specific anatomical areas, did find an increase in fractional anisotropy (FA) in non-remitters35. As AD and FA are correlated, this outcome appears consistent, but it is nevertheless in contrast to a number of earlier cited findings39, 40, 42, 43, 66. This shows the importance of taking into consideration that FA, AD, and other DTI markers can be influenced by varying mechanisms, which depend on one hand on the structural integrity of an axon but also on axonal density67.
A limitation of the study is the post hoc nature of the current analyses; however, they were motivated by the attempt to replicate data from an earlier study19, and the primary variables of interest are identical. Therefore, with all caution, the current analysis overall confirms the previous pilot study. It has, however, to be considered that the inclusion/exclusion criteria differ between the studies.
In conclusion, we (re-)identified an easily accessible imaging marker which appears to be related to the clinical course of depression. Ventricular volume may affect other imaging parameters and should therefore be taken into account in future imaging studies, at least in studies of MDD. In addition, the current findings go beyond a strictly descriptive association. With the additional observation of the relationship between increased ventricular volumes and increased choroid plexus volumes, our findings provide a plausible hypothesis as to how neuroendocrine and metabolic parameters mechanistically influence depressive symptoms. A new focus on choroid plexus function in stress-related disorders appears to be supported.
## Conflict of interest:
HM: full time employee at Reviva Pharmaceuticals. He also is the owner of Murck-Neuroscience LLC, which develops a patent in the area of major depression.
MF: lifetime disclosures: Research Support: Abbott Laboratories; Acadia Pharmaceuticals; Alkermes, Inc.; American Cyanamid; Aspect Medical Systems; AstraZeneca; Avanir Pharmaceuticals; AXSOME Therapeutics; BioClinica, Inc; Biohaven; BioResearch; BrainCells Inc.; Bristol-Myers Squibb; CeNeRx BioPharma; Centrexion Therapeutics Corporation; Cephalon; Cerecor; Clarus Funds; Clexio Biosciences; Clintara, LLC; Covance; Covidien; Eli Lilly and Company;EnVivo Pharmaceuticals, Inc.; Euthymics Bioscience, Inc.; Forest Pharmaceuticals, Inc.; FORUM Pharmaceuticals; Ganeden Biotech, Inc.; Gentelon, LLC; GlaxoSmithKline; Harvard Clinical Research Institute; Hoffman-LaRoche; Icon Clinical Research; Indivior; i3 Innovus/Ingenix; Janssen R&D, LLC; Jed Foundation; Johnson & Johnson Pharmaceutical Research & Development; Lichtwer Pharma GmbH; Lorex Pharmaceuticals; Lundbeck Inc.; Marinus Pharmaceuticals; MedAvante; Methylation Sciences Inc; National Alliance for Research on Schizophrenia & Depression (NARSAD); National Center for Complementary and Alternative Medicine (NCCAM); National Coordinating Center for Integrated Medicine (NiiCM); National Institute of Drug Abuse (NIDA); National Institutes of Health; National Institute of Mental Health (NIMH); Neuralstem, Inc.; NeuroRx; Novartis AG; Novaremed; Organon Pharmaceuticals; Otsuka Pharmaceutical Development, Inc.; PamLab, LLC.; Pfizer Inc.; Pharmacia-Upjohn; Pharmaceutical Research Associates., Inc.; Pharmavite® LLC; PharmoRx Therapeutics; Photothera; Praxis Precision Medicines; Premiere Research International; Protagenic Therapeutics, Inc.; Reckitt Benckiser; Relmada Therapeutics Inc.; Roche Pharmaceuticals; RCT Logic, LLC (formerly Clinical Trials Solutions, LLC); Sanofi-Aventis US LLC; Shenox Pharmaceuticals, LLC; Shire; Solvay Pharmaceuticals, Inc.; Stanley Medical Research Institute (SMRI); Synthelabo; Taisho Pharmaceuticals; Takeda Pharmaceuticals; Tal Medical; VistaGen; WinSanTor, Inc.; Wyeth- Ayerst Laboratories; Advisory Board/Consultant: Abbott Laboratories; Acadia; Aditum Bio Management Company, LLC; Affectis Pharmaceuticals AG; Alfasigma USA, Inc.; Alkermes, Inc.; Altimate Health Corporation; Amarin Pharma Inc.; Amorsa Therapeutics, Inc.; Ancora Bio, Inc.; Angelini S.p. A; Aptinyx Inc.; Arbor Pharmaceuticals, LLC; Aspect Medical Systems; Astella Pharma Global Development, Inc.; AstraZeneca; Auspex Pharmaceuticals; Avanir Pharmaceuticals; AXSOME Therapeutics; Bayer AG; Best Practice Project Management, Inc.; Biogen; BioMarin Pharmaceuticals, Inc.; BioXcel Therapeutics; Biovail Corporation; Boehringer Ingelheim; Boston Pharmaceuticals; BrainCells Inc; Bristol-Myers Squibb; Cambridge Science Corporation; CeNeRx BioPharma; Cephalon, Inc.; Cerecor; Clexio Biosciences; Click Therapeutics, Inc; CNS Response, Inc.; Compellis Pharmaceuticals; Cybin Corporation; Cypress Pharmaceutical, Inc.; DiagnoSearch Life Sciences (P) Ltd.; Dainippon Sumitomo Pharma Co. Inc.; Dr. Katz, Inc.; Dov Pharmaceuticals, Inc.; Edgemont Pharmaceuticals, Inc.; Eisai Inc.; Eli Lilly and Company; ElMindA; EnVivo Pharmaceuticals, Inc.; Enzymotec LTD; ePharmaSolutions; EPIX Pharmaceuticals, Inc.; Esthismos Research, Inc.; Euthymics Bioscience, Inc.; Evecxia Therapeutics, Inc.; ExpertConnect, LLC; FAAH Research Inc.; Fabre-Kramer Pharmaceuticals, Inc.; Forest Pharmaceuticals, Inc.; Forum Pharmaceuticals; Gate Neurosciences, Inc.; GenetikaPlus Ltd.; GenOmind, LLC; GlaxoSmithKline; Grunenthal GmbH; Happify; H. 
Lundbeck A/S; Indivior; i3 Innovus/Ingenis; Intracellular; Janssen Pharmaceutica; Jazz Pharmaceuticals, Inc.; JDS Therapeutics, LLC; Johnson & Johnson Pharmaceutical Research & Development, LLC; Knoll Pharmaceuticals Corp.; Labopharm Inc.; Lorex Pharmaceuticals; Lundbeck Inc.; Marinus Pharmaceuticals; MedAvante, Inc.; Merck & Co., Inc.; Mind Medicine Inc.; MSI Methylation Sciences, Inc.; Naurex, Inc.; Navitor Pharmaceuticals, Inc.; Nestle Health Sciences; Neuralstem, Inc.; Neurocrine Biosciences, Inc.; Neuronetics, Inc.; NextWave Pharmaceuticals; Niraxx Light Therapeutics, Inc; Northwestern University; Novartis AG; Nutrition 21; Opiant Pharmecuticals; Orexigen Therapeutics, Inc.; Organon Pharmaceuticals; Osmotica; Otsuka Pharmaceuticals; Ovid Therapeutics, Inc.; Pamlab, LLC.; Perception Neuroscience; Pfizer Inc.; PharmaStar; PharmaTher Inc.; Pharmavite® LLC.; PharmoRx Therapeutics; Polaris Partners; Praxis Precision Medicines; Precision Human Biolaboratory; Prexa Pharmaceuticals, Inc.; Protagenic Therapeutics, Inc; PPD; PThera, LLC; Purdue Pharma; Puretech Ventures; Pure Tech LYT, Inc.; PsychoGenics; Psylin Neurosciences, Inc.; RCT Logic, LLC (formerly Clinical Trials Solutions, LLC); Relmada Therapeutics, Inc.; Rexahn Pharmaceuticals, Inc.; Ridge Diagnostics, Inc.; Roche; Sanofi-Aventis US LLC.; Sensorium Therapeutics; Sentier Therapeutics; Sepracor Inc.; Servier Laboratories; Schering-Plough Corporation; Shenox Pharmaceuticals, LLC; Solvay Pharmaceuticals, Inc.; Somaxon Pharmaceuticals, Inc.; Somerset Pharmaceuticals, Inc.; Sonde Health; Sunovion Pharmaceuticals; Supernus Pharmaceuticals, Inc.; Synthelabo; Taisho Pharmaceuticals; Takeda Pharmaceutical Company Limited; Tal Medical, Inc.; Tetragenex; Teva Pharmaceuticals; TransForm Pharmaceuticals, Inc.; Transcept Pharmaceuticals, Inc.; University of Michigan, Department of Psychiatry; Usona Institute, Inc.; Vanda Pharmaceuticals, Inc.; Versant Venture Management, LLC; VistaGen; Xenon Pharmaceuticals Inc.; Speaking/Publishing: Adamed, Co; Advanced Meeting Partners; American Psychiatric Association; American Society of Clinical Psychopharmacology; AstraZeneca; Belvoir Media Group; Boehringer Ingelheim GmbH; Bristol-Myers Squibb; Cephalon, Inc.; CME Institute/Physicians Postgraduate Press, Inc.; Eli Lilly and Company; Forest Pharmaceuticals, Inc.; GlaxoSmithKline; Global Medical Education, Inc.; Imedex, LLC; MGH Psychiatry Academy/Primedia; MGH Psychiatry Academy/Reed Elsevier; Novartis AG; Organon Pharmaceuticals; Pfizer Inc.; PharmaStar; United BioSource, Corp.; Wyeth-Ayerst Laboratories; Equity Holdings: Compellis; Neuromity; Psy Therapeutics; Sensorium Therapeutics; Royalty/patent, other income: Patents for Sequential Parallel Comparison Design (SPCD), licensed by MGH to Pharmaceutical Product Development, LLC (PPD) (US_7840419, US_7647235, US_7983936, US_8145504, US_8145505); and patent application for a combination of *Ketamine plus* Scopolamine in Major Depressive Disorder (MDD), licensed by MGH to Biohaven. Patents for pharmacogenomics of Depression Treatment with Folate (US_9546401, US_9540691). Copyright: for the MGH Cognitive & Physical Functioning Questionnaire (CPFQ), Sexual Functioning Inventory (SFI), Antidepressant Treatment Response Questionnaire (ATRQ), Discontinuation-Emergent Signs & Symptoms (DESS), Symptoms of Depression Questionnaire (SDQ), and SAFER; Belvoir; Lippincott, Williams & Wilkins; Wolkers Kluwer; World Scientific Publishing Co. Pte. Ltd.
CCF: nothing to disclose. CC: personal fees from Janssen, Perception, and Takeda; and grants from Clexio, Livanova, AFSP, and the National Institute of Mental Health.
MHT: research support from the Agency for Healthcare Research and Quality, Cyberonics Inc., National Alliance for Research in Schizophrenia and Depression, NIMH, National Institute on Drug Abuse, National Institute of Diabetes and Digestive and Kidney Diseases, and Johnson & Johnson; consulting and speaker fees from Abbott Laboratories Inc., Akzo (Organon Pharmaceuticals Inc.), Allergan Sales LLC, Alkermes, Astra Zeneca, Axon Advisors, Brintellix, Bristol-Myers Squibb Company, Cephalon Inc., Cerecor, Eli Lilly & Company, Evotec, Fabre Kramer Pharmaceuticals Inc., Forest Pharmaceuticals, GlaxoSmithKline, Health Research Associates, Johnson & Johnson, Lundbeck, MedAvante Medscape, Medtronic, Merck, Mitsubishi Tanabe Pharma Development America Inc., MSI Methylation Sciences Inc., Nestle Health Science-PamLab Inc., Naurex, Neuronetics, One Carbon Therapeutics Ltd, Otsuka Pharmaceuticals, Pamlab, Parke-Davis Pharmaceuticals Inc., Pfizer Inc., PgxHealth, Phoenix Marketing Solutions, Rexahn Pharmaceuticals, Ridge Diagnostics, Roche Products Ltd, Sepracor, SHIRE Development, Sierra, SK Life and Science, Sunovion, Takeda, Tal Medical/Puretech Venture, Targacept, Transcept, VantagePoint, Vivus, and Wyeth-Ayerst Laboratories.
# Polygenic risk of Social-isolation and its influence on social behavior, psychosis, depression and autism spectrum disorder
## Abstract
Social-isolation has been linked to a range of psychiatric issues, but the behavioral component that drives it is not well understood. Here, a GWAS is carried out to identify genetic variants which contribute to Social-isolation behaviors in up to 449,609 participants from the UK Biobank. 17 loci were identified at genome-wide significance, contributing to a $4\%$ SNP heritability estimate. Using the Social-isolation GWAS, polygenic risk scores (PRS) were derived in ALSPAC, an independent, developmental cohort, and used to test for association with friendship quality. At age 18, friendship scores were associated with the Social-isolation PRS, demonstrating that the genetic factors are able to predict related social traits. LD score regression using the GWAS demonstrated genetic correlation with autism spectrum disorder, schizophrenia, and major depressive disorder. However, no evidence of causality was found using a conservative Mendelian randomization approach, other than that of autism spectrum disorder on Social-isolation. Our results show that Social-isolation has a small heritable component which may drive those behaviors, and that this component is genetically associated with other social traits, such as friendship satisfaction, as well as with psychiatric disorders.
## Introduction
Social contact is essential for surviving and thriving in human societies1. As such, having limited contact with other people, or Social-isolation, can have detrimental effects on both physical and mental health. There is evidence that lack of social contact is associated with schizophrenia2,3, autism spectrum disorder3, and depression4, as well as with medical conditions such as cardiovascular disease1,5 and diabetes6. Longitudinal studies indicate that Social-isolation can predate mental issues and have a strong causal effect on poor mental health outcomes4,7,8. These issues have been acutely brought to light in the context of the Covid-19 pandemic, in which forced social isolation has had a substantial negative effect on mental health9. Social-isolation has been found to be strongly associated with the development of psychosis, and it has been hypothesized that this contribution may be due to negative, delusional, or paranoid thoughts not being tested in reality, and therefore corrected, in social interactions10,11. Despite the impact of Social-isolation on mental and physical health, it remains one of the least studied factors in psychiatric disorders, limiting understanding of aetiology and causality with regard to psychiatric disorders12,13,14,15. Associations between genetics and traits related to social contact, such as feelings of loneliness (feelings of distress or discomfort from being alone) and sociability (the ability to connect and socialize with others), have been noted16. However, the existence and influence of an exclusive genetic predisposition towards Social-isolation behaviors, i.e., action that leads to isolation, is yet to be established. Consequently, there is a fundamental gap in our knowledge about the extent to which Social-isolation may represent a causal and independent risk for poor mental and physical health instead of being merely a direct consequence of other (clinical) symptomatology, for example due to stress or feelings of paranoia.
Twin studies have demonstrated that there is a similar genetic influence on both social isolation ($40\%$) and loneliness ($38\%$), but that they are only moderately genetically correlated, reflecting partially distinct constructs17. However, to our knowledge no prior study has carried out a genome-wide association study (GWAS) to elucidate the polygenic component of the purely behavioral aspects of Social-isolation, as distinct from feelings relating to social behavior such as loneliness. This is pertinent, as these behaviors could provide modifiable early intervention targets if found to be on the causal pathway between inherited genetic variation and psychiatric disorders18.
In order to better understand the genetic factors that influence Social-isolation, the present study [1] conducted a novel GWAS for Social-isolation behavior in the UK Biobank cohort; [2] derived Polygenic Risk Scores (PRS) from this GWAS for individuals in the Avon Longitudinal Study of Parents and Children (ALSPAC, UK) and used them to examine associations with social traits for GWAS validation; [3] examined the genetic correlation between Social-isolation and psychiatric disorders using GWAS results from the Psychiatric Genomics Consortium (PGC); and [4] applied Mendelian Randomization (MR) to estimate causal effects between Social-isolation and psychiatric disorders.
## GWAS
To investigate genetic propensity towards social isolation behavior (Social-isolation), a GWAS was performed in the UK Biobank, based on a composite of 4 self-reported behavioral traits pertaining to this behavior. LD score regression revealed that the individual traits were genetically correlated (see Supplementary Table 4). These were meta-analyzed with MTAG before being conditioned on schizophrenia (SCZ), major depressive disorder (MDD), and autism spectrum disorder (ASD) using mtCOJO. The initial GWAS identified 19 loci; post-conditioning, 17 loci remained at genome-wide significance ($P < 5 \times 10^{-8}$; see Fig. 1).
The majority of the SNPs found to be associated with Social-isolation were not previously associated with psychiatric or neurodevelopmental disorders. However, there are several exceptions. For example, the top lead SNP (rs67777906; $P = 1.80 \times 10^{-15}$) is situated in the ARFGEF2 gene, implicated in distinguishing between bipolar disorder (BD) and SCZ32, as well as in post-traumatic stress disorder (PTSD)33,34. The second top SNP on chromosome 8, and the fourth top hit overall (rs2721942; $P = 1.47 \times 10^{-10}$), has also been associated with PTSD35. On chromosome 19, the lead SNP (rs28567442; $P = 6.31 \times 10^{-10}$) is embedded in ZNF536, implicated in the development of the forebrain and associated with SCZ36. Other genome-wide significant SNPs are in genes associated with SCZ (rs6125539; $P = 4.72 \times 10^{-9}$; CSE1L)32 and impulsivity (rs1248860; $P = 9.51 \times 10^{-9}$; CADM2)37. On chromosome 13, rs17057528 ($P = 8.82 \times 10^{-9}$) is in DIAPH3, identified as an autism risk gene38, and is also implicated in hearing loss and impairment of speech perception39.
## ALSPAC
To validate the Social-isolation GWAS and PRS in an independent cohort, as well as to explore its generalizability to a developmental cohort, PRS were generated in ALSPAC using the 13 significance thresholds for SNP inclusion. The PRS were used to examine associations with friendship scores, comprising the 5 items relating to peer contact, in $n = 4,934$ (at age 12) and $n = 2,909$ (at age 18) participants of the ALSPAC cohort.
The Social-isolation PRS were not associated with friendship scores at age 12. At age 18, the friendship score was significantly associated with the Social-isolation PRS at the PT = 0.05 and PT = 0.1 thresholds, with the latter being the most strongly associated (r2 = 0.006, $P = 0.001$; see Supplementary Tables 6 and 7 for full results). The fewer SNPs were included, the less predictive the model in terms of p-value, with the genome-wide significant SNPs alone not associated with the friendship scores. This demonstrates the signal contained in SNPs that did not reach genome-wide significance in contributing towards social isolation behavior.
## LD score regression
LD score regression was performed to investigate genetic correlations between Social-isolation in the UK Biobank and schizophrenia (SCZ), major depressive disorder (MDD), and autism spectrum disorder (ASD) from the Psychiatric Genomics Consortium (PGC). All 3 psychiatric disorders were correlated with Social-isolation, with ASD having the strongest genetic correlation (rg = 0.23, SE = 0.048, $P = 2.25 \times 10^{-6}$), followed by SCZ (rg = 0.102, SE = 0.028, $P = 0.0002$) and MDD (rg = 0.093, SE = 0.035, $P = 0.009$). The results indicate that Social-isolation genetics are associated with the genetics of these psychiatric disorders and may form part of the genetic basis for them. This could occur if the genetics of Social-isolation have downstream effects on behavior that could increase risk of symptoms and eventual diagnosis, or if the diagnosis itself leads to increased Social-isolation. Using LD score regression, the SNP-heritability of Social-isolation after conditioning on psychiatric disorders was estimated to be h2 = 0.04 (SE = 0.0022, $P = 8.95 \times 10^{-77}$), suggesting a small but significant SNP-based heritable component.
Genetic correlations and heritability estimates were obtained using LD score regression31 to investigate associations between Social-isolation and SCZ, MDD, and ASD, using GWAS summary statistics from the Social-isolation GWAS conducted in the UK Biobank and from each psychiatric disorder from the Psychiatric Genomics Consortium (PGC).
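For orientation, the quantities reported above (h2 and rg) are the slope parameters of the standard LD score regression models; in the usual formulation (following the standard LDSC literature, not reproduced from this paper):

```latex
% Single-trait LDSC: for SNP j with LD score \ell_j, sample size N and M SNPs,
% the intercept term Na + 1 absorbs confounding:
E[\chi^2_j] = \frac{N h^2}{M}\,\ell_j + Na + 1
% Cross-trait LDSC: studies of sizes N_1, N_2 sharing N_s samples with
% phenotypic correlation \rho; \rho_g is the genetic covariance:
E[z_{1j} z_{2j}] = \frac{\sqrt{N_1 N_2}\,\rho_g}{M}\,\ell_j
                 + \frac{\rho N_s}{\sqrt{N_1 N_2}},
\qquad r_g = \frac{\rho_g}{\sqrt{h_1^2\, h_2^2}}
```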
## Mendelian randomization
Using the MR-Egger method, there was no evidence of causal relationships between Social-isolation and psychiatric disorders, with the exception of ASD having a causal effect on Social-isolation when SNPs were selected as instruments at the $5 \times 10^{-5}$ threshold (Beta = 0.019, SE = 0.0052, $P = 0.00041$). However, MR-Egger is a conservative method and may be underpowered to detect causal associations in complex behavioral traits. See Supplementary Table 8 for full results.
To test for causality between Social-isolation and psychiatric outcomes, bi-directional Mendelian Randomization was conducted using the package TwoSampleMR (https://github.com/MRCIEU/TwoSampleMR). Instrumental variables for the exposures (both Social-isolation and the psychiatric disorders SCZ, MDD, and ASD) were extracted at genome-wide significance and at $p < 5 \times 10^{-5}$ after strict LD clumping with 10,000 kb windows and LD r2 < 0.001 to ensure instruments were independent. Exposures and outcomes were harmonized, and MR-Egger was used in the primary analyses to account for horizontal pleiotropy. The inverse variance weighted (IVW) method was also used as a less conservative, more powerful approach. To account for multiple testing, a Bonferroni-corrected p-value threshold of $P < 0.004$ was used to ascertain significance.
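The analysis above used the TwoSampleMR R package; purely for illustration, the MR-Egger step itself reduces to a weighted regression with a free intercept, sketched here in Python with hypothetical column names:

```python
# Hedged sketch of MR-Egger: regress SNP-outcome effects on SNP-exposure
# effects with an intercept (the intercept absorbs directional pleiotropy).
# Column names are hypothetical stand-ins for a harmonized instrument table.
import numpy as np
import pandas as pd
import statsmodels.api as sm

dat = pd.read_csv("harmonized_instruments.csv")  # one row per instrument SNP

# Orient effects so every SNP-exposure beta is positive (MR-Egger convention).
flip = np.sign(dat["beta_exposure"])
bx = dat["beta_exposure"] * flip
by = dat["beta_outcome"] * flip

# Weighted least squares, weights = inverse variance of the outcome betas.
X = sm.add_constant(bx)
fit = sm.WLS(by, X, weights=1.0 / dat["se_outcome"] ** 2).fit()

print("causal estimate (slope):", fit.params.iloc[1], "SE:", fit.bse.iloc[1])
print("pleiotropy test (intercept p):", fit.pvalues.iloc[0])
```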
## Discussion
This is the first study of the genetic factors that contribute to the behavior of social isolation. A GWAS identified 17 genetic loci which predispose towards social isolation behavior. Some of these were in genes previously associated with psychiatric and neurological disorders, as well as with neurotransmitter and brain function. However, most were not previously found to be associated with other mental health, neurodevelopmental, or personality traits. Polygenic risk scores (PRS) derived from the GWAS were associated with friendship scores at age 18, and there was strong evidence supporting a shared genetic etiology between Social-isolation and major psychiatric disorders, based on genetic correlations.
The PRS generated in ALSPAC were associated with friendship scores at age 18 but not at age 12. These results suggest the Social-isolation GWAS is a valid indicator of social-related traits, with higher PRS associated with worse friendship satisfaction and outcomes. The PRS being associated with scores at age 18 as opposed to age 12 might indicate that genetically influenced personal social behavior does not necessarily manifest until later in adolescence. This finding could be due to confounding by gene-environment correlation40. At younger ages, children may have less control over their own social environments and interactions than at age 18, as their parents would likely select their environments for them, in which case behavior would be less strongly influenced by their own genetic predispositions. A similar effect is observed in intelligence genetics, in which heritability increases over time41. It is considered that genetic predisposition leads to active and passive correlations with school selection or teacher attention, for example, creating a “snowball” effect in which those genetic influences are amplified over time. It is possible that similar effects are at play with behavioral genetics, in which Social-isolation genetic predisposition leads to the development, or lack thereof, of social skills and sociability, modulating social isolation over time.
Social-isolation was found to be genetically correlated with SCZ, as well as with ASD and MDD. This pattern of results suggests that Social-isolation is a feature that cuts across multiple psychiatric disorders and mental health generally. It is well known that social isolation is linked to poorer mental health42, but here it is shown that there is a genetic association which indicates that Social-isolation may form part of the aetiological basis of these disorders. Further studies with psychiatric cases will be required to test this hypothesis, but considering that social engagement is an easily modifiable intervention target43, identifying those with a genetic predisposition towards Social-isolation may be a useful strategy in mitigating mental health issues.
The current study was able to demonstrate a heritable genetic component to Social-isolation by utilizing a large sample size and detailed phenotype information, allowing a comprehensive and valid Social-isolation trait to be developed. This was confirmed by the PRS generated from the GWAS being validated in an independent sample, and by the several genome-wide significant SNPs found to be associated with Social-isolation. However, LD score regression only estimated $4\%$ heritability for Social-isolation, and the PRS were only able to explain $0.6\%$ of the variance in friendship scores in the ALSPAC replication sample. The SNP-heritability is likely to be a lower-bound estimate, as it only takes into account the common SNPs genotyped and not rare variants or de novo mutations44. Further, despite having up to 450,000 individuals available for the GWAS, the most powerful GWAS, such as those of educational attainment, are becoming increasingly predictive with approximately 3 million participants45. Thus, increasing the sample size will allow the detection of more SNPs that contribute to Social-isolation behavior and increase both heritability estimates and the predictive power of PRS. In ALSPAC, the target sample also had relatively few participants at age 18 ($n = 2,909$) compared to age 12 ($n = 4,934$), which likely contributed to the low variance explained.
In order to further investigate how the genetic component of Social-isolation manifests in behavior and the development of psychiatric disorders, further studies will be required which investigate whether or not Social-isolation PRS are able to predict case control status for disorders such as SCZ, MDD and ASD. If so, it will be necessary to consider which specific behaviors are influenced by genetics, and how these manifest in the development and diagnosis of psychiatric disorders. By targeting behavior, our present study has laid the foundation for identifying a possible target for intervention that can be addressed in real world scenarios. However, the relatively small effect sizes of individual SNPs and the resulting low predictive power of PRS mean further investigation is necessary.
## Discovery sample
The UK Biobank (UKB) is a detailed prospective study with 502,650 participants aged 40–69 years when recruited in 2006–2010, and includes both genetic and phenotypic data on complex traits19. The recruitment process was coordinated around 22 centers in the UK (between 2007 and 2010). Individuals within travelling distance of these centers were identified using NHS patient registers (response rate = $5.47\%$). Invitations were sent using a stratified approach to ensure demographic parameters were in concordance with the general population. All participants provided written informed consent, and the current study was ethically approved by the UK Biobank Ethics and Governance Council (REC reference 11/NW/0382; UK Biobank application reference 18177).
## Genetic data
Blood samples from 488,366 UK Biobank participants were genotyped using the UK BiLEVE array or the UK Biobank Axiom array. Further details on the genotyping and quality control (QC) can be found on the UK Biobank website (http://www.ukbiobank.ac.uk/scientists-3/genetic-data/). In the current study, SNPs were removed if they had missingness > 0.02 or a minor allele frequency (MAF) < 0.01. Exclusions based on heterozygosity and missingness were implemented according to UK Biobank recommendations (http://biobank.ctsu.ox.ac.uk/showcase/label.cgi?id=100314). Samples were removed if they were discordant for sex. SNPs deviating from Hardy-Weinberg equilibrium (HWE) were removed at a threshold of $P < 1 \times 10^{-8}$. Genotype data were imputed according to the standard UK Biobank procedure on 487,442 samples20, excluding variants with an MAF < 0.01 and an imputation quality score < 0.3. After basic QC procedures and exclusions, 488,337 samples with phenotype data remained for genetic analysis. After excluding those of non-European ancestry using 4-means clustering on the first two principal components, 449,609 samples remained for genetic analysis.
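For illustration only, the SNP-level thresholds above amount to the following filter, sketched over a hypothetical per-SNP summary table (UK Biobank QC is performed with dedicated tooling, not this script):

```python
# Hedged sketch of the SNP-level QC thresholds described above.
# Column names are hypothetical.
import pandas as pd

snps = pd.read_csv("snp_qc_stats.csv")  # columns: snp, missingness, maf, hwe_p, info

keep = (
    (snps["missingness"] <= 0.02)   # drop SNPs with poor call rates
    & (snps["maf"] >= 0.01)         # drop rare variants
    & (snps["hwe_p"] >= 1e-8)       # drop Hardy-Weinberg violations
    & (snps["info"] >= 0.3)         # drop poorly imputed variants
)
snps_qc = snps[keep]
print(f"retained {len(snps_qc)} of {len(snps)} SNPs")
```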
## Social isolation:
To derive a comprehensive measure of Social Isolation (Social-isolation), we ran a data-driven principal component analysis (using Promax rotation) on self-reported answers to questions that [1] directly probed the quantity or quality of social engagement, and [2] were available for at least $90\%$ of study participants. Based on these criteria, we included data on the following 3 items, which all loaded on a single factor: “Frequency of family/friend visits”, “Being able to confide in others”, and “Number of social activities a week”. The items “Frequency of family/friend visits” and “Being able to confide in others” were both rated on a seven-point Likert scale (i.e., ‘Almost daily’, ‘2–4 times a week’, ‘about once a week’, ‘about once a month’, ‘once every few months’, ‘never or almost never’, and ‘no friends/family outside of household’); both were treated continuously and recoded so that higher values corresponded to greater social isolation. Answer options for the item “Number of social activities a week” included attending a sports club, pub, social club, religious group, adult educational classes, or other group activities, and were summed to represent the ‘total number of social activities a week’, also treated continuously.
To complement the answers to the self-report, sociodemographic information about the number of people in the household was added as an additional proxy of social contact. This “Number in household” item was dichotomized as a binary trait representing living alone, with 0 others in the household coded as ‘1’ for Social-isolation and any greater number in the household as ‘0’. See the supplementary material for full phenotype and coding details. For all items, individuals with missing data, or who preferred not to answer, were excluded. Participants who were wheelchair users or morbidly obese (BMI > 40) were also excluded from the analysis, as these factors may arguably hamper the level of social activity but are unrelated to genetic or psychiatric vulnerability.
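A minimal sketch of this item coding, assuming a hypothetical extract of the relevant UK Biobank fields (column names, level labels, and boolean flags are stand-ins):

```python
# Hedged sketch of the Social-isolation item coding described above.
# Column names and level labels are hypothetical stand-ins.
import pandas as pd

ukb = pd.read_csv("ukb_social_items.csv")

# Exclusions: missing/prefer-not-to-answer, wheelchair users, BMI > 40.
ukb = ukb.dropna()
ukb = ukb[(~ukb["wheelchair_user"]) & (ukb["bmi"] <= 40)]  # boolean flag assumed

# Likert item recoded so that higher values = greater isolation.
visit_levels = [
    "Almost daily", "2-4 times a week", "About once a week",
    "About once a month", "Once every few months",
    "Never or almost never", "No friends/family outside household",
]
ukb["visits_isolated"] = ukb["friend_family_visits"].map(
    {lvl: i for i, lvl in enumerate(visit_levels)}
)

# Activity indicator columns summed, then reversed (fewer activities = higher).
activity_cols = [c for c in ukb.columns if c.startswith("activity_")]
n_activities = ukb[activity_cols].sum(axis=1)
ukb["activities_isolated"] = n_activities.max() - n_activities

# Household proxy: living alone (0 others in household) coded as 1.
ukb["lives_alone"] = (ukb["n_others_in_household"] == 0).astype(int)
```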
## ALSPAC cohort
The Avon Longitudinal Study of Parents and Children (ALSPAC) is a prospective birth cohort which recruited pregnant women with expected delivery dates between April 1991 and December 1992 from Bristol, UK. 14,541 pregnant women were initially enrolled, with 14,062 children born and 13,988 alive at 1 year of age. Detailed information on the health and development of the children and their parents was collected from regular clinic visits and completion of questionnaires. The study website contains details of all the data that are available through a fully searchable data dictionary and variable search tool: http://www.bristol.ac.uk/alspac/researchers/our-data/. A detailed description of the cohort has been previously published21,22. Ethical approval for the study was obtained from the ALSPAC Ethics and Law Committee and the Local Research Ethics Committees.
## Genotype data
9,115 participants in ALSPAC have genotype data available after individuals of non-European ancestry were removed. ALSPAC children were genotyped using the Illumina HumanHap550 quad chip genotyping platform. SNPs with a MAF < 0.01, a call rate < 0.95, or evidence of violations of Hardy-Weinberg equilibrium ($P < 5 \times 10^{-7}$) were removed. Data were imputed using the standard ALSPAC procedure with the HapMap 2 reference panel, keeping SNPs with MAF > 0.02 and an INFO score > 0.9. This resulted in 4,731,235 SNPs in the analysis. Full quality control procedures can be found at: https://alspac.github.io/omics_documentation/alspac_omics_data_catalogue.html
## Phenotype data
To test the validity of the Social-isolation construct, 2 friendship scores were derived from 5 questions from clinical questionnaires based on the Cambridge Hormones and Moods Project Friendship Questionnaire23, completed by the parents of offspring at ages 12 and 18, respectively, e.g., “Teenager is happy with number of friends”. Each question consisted of 4 to 6 categorical responses, corresponding to a 4- to 6-point scale, e.g., “1 = Very happy, 2 = Quite happy, 3 = Quite unhappy, 4 = Unhappy, 5 = No friends”. Responses were summed to create a continuous scale, with higher scores corresponding to lower friendship quality and greater Social-isolation. 4,934 of the cohort had the phenotype information at age 12, and 2,909 at age 18. See Supplementary Table 2 for full details of the questions.
## GWAS summary statistics
To test for genetic correlations between Social-isolation and associated psychiatric disorders using LD score regression, the Social-isolation GWAS based on UK Biobank data was used alongside 3 base genome-wide association summary statistics for schizophrenia (SCZ), depression (MDD), and autism spectrum disorder (ASD). These were the Psychiatric Genomics Consortium Wave 3 (PGC3) SCZ GWAS24, the 2019 PGC MDD Working Group GWAS25, and the 2017 PGC ASD Working Group GWAS26.
## GWAS analysis
Association testing of autosomal SNPs was carried out on each of the 4 Social-isolation traits using BOLT Bayesian linear mixed models (BOLT-LMM)27 to account for relatedness and cryptic population stratification while increasing power and controlling for false positives. Age, sex, batch, and center were included as covariates, as well as education, income, and the Townsend deprivation index (TDI) to account for socio-economic status (SES). The top 15 principal components (PCs) were also included to control for main population stratification. MTAG28 was used to meta-analyze the individual “Frequency of family/friend visits”, “Being able to confide in others”, “Number of social activities a week”, and “Number in household” outcomes to form a single, composite Social-isolation GWAS; this is achieved by leveraging power across correlated GWAS estimates in overlapping samples. Finally, multi-trait-based conditional and joint analysis (mtCOJO)29 was used to adjust the Social-isolation GWAS summary statistics for the effects of psychiatric disorders, specifically schizophrenia (SCZ), major depressive disorder (MDD), and autism spectrum disorder (ASD), using European-ancestry GWAS summary statistics for each. These are the psychiatric disorders commonly considered to lead to an increased risk of social withdrawal and isolation2,3,4,7,8, and they were conditioned on to remove potential downstream effects of psychiatric disorders. SNPs were selected as instruments at $5 \times 10^{-5}$, clumped to be at least 1 Mb apart or with LD r2 < 0.2, based on the 1000 Genomes Project Phase 3 reference panel, to ensure independence. mtCOJO uses these SNPs in Generalized Summary-data-based Mendelian Randomization (GSMR) to estimate the effect of the exposures (psychiatric disorders) on the outcome (Social-isolation), producing conditioned effect sizes and p-values. Statistically significant independent signals were identified using 1 Mb clumping and a genome-wide significance threshold of $P < 5 \times 10^{-8}$.
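As a toy illustration of the final locus-counting step (real clumping also uses LD r2 from a reference panel; this sketch uses physical distance only, with hypothetical file and column names):

```python
# Hedged sketch of greedy distance-based clumping: declare independent
# genome-wide significant loci using 1 Mb windows and P < 5e-8.
import pandas as pd

gwas = pd.read_csv("social_isolation_gwas.tsv", sep="\t")  # snp, chr, bp, p

sig = gwas[gwas["p"] < 5e-8].sort_values("p")  # strongest hits first
lead_snps = []
for _, row in sig.iterrows():
    # Keep a SNP as a new lead only if it is >1 Mb from every existing lead
    # on the same chromosome.
    if all(row["chr"] != lead["chr"] or abs(row["bp"] - lead["bp"]) > 1_000_000
           for lead in lead_snps):
        lead_snps.append(row)

print(f"{len(lead_snps)} independent loci")
```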
## Polygenic risk score analysis
Polygenic risk scores (PRS) were generated in ALSPAC using PRSice-230, using the Social-isolation GWAS to sum and weight risk alleles for individuals in each cohort. Social-isolation GWAS results were pruned for linkage disequilibrium (LD) using the p-value-informed clumping method in PLINK (--clump-p1 1 --clump-p2 1 --clump-r2 0.1 --clump-kb 250). This method preferentially retains SNPs with the strongest evidence of association and removes SNPs in LD (r2 > 0.1) that show weaker evidence of association within 250 kb windows, based on the LD structure of the HRC reference panel. Subsets of SNPs were selected from the results at 13 increasingly liberal p-value thresholds (ranging from $p < 5 \times 10^{-8}$ to $p \leq 0.5$). Risk alleles were included and tested to predict outcomes at these 13 significance thresholds, allowing the most predictive PRS and threshold to be utilized. These PRS were tested for associations with the friendship scores in ALSPAC, using linear regression models including age, sex, and 10 PCs as covariates. To account for the multiple testing of 13 PRS thresholds and 2 friendship scores, a Bonferroni-corrected significance threshold of $P < 0.002$ was used.
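A minimal sketch of the scoring-and-testing logic that PRSice-2 automates, with hypothetical file and column names; the PRS is standardized before regression:

```python
# Hedged sketch of PRS scoring and testing: sum effect alleles weighted by
# GWAS betas at each P threshold, then regress the friendship score on the
# standardized PRS with age, sex, and 10 PCs as covariates.
import pandas as pd
import statsmodels.formula.api as smf

weights = pd.read_csv("clumped_gwas_weights.csv")           # snp, beta, p
geno = pd.read_csv("alspac_dosages.csv", index_col="iid")   # effect-allele dosage per SNP
pheno = pd.read_csv("alspac_pheno.csv", index_col="iid")    # friendship_18, age, sex, PC1..PC10

betas = weights.set_index("snp")["beta"]
for pt in [5e-8, 1e-5, 0.001, 0.05, 0.1, 0.5]:              # subset of the 13 thresholds
    snps = weights.loc[weights["p"] < pt, "snp"]
    if snps.empty:
        continue
    prs = (geno[snps] * betas.loc[snps]).sum(axis=1)        # weighted allele sum
    df = pheno.assign(prs=(prs - prs.mean()) / prs.std())
    covs = " + ".join(["age", "sex"] + [f"PC{i}" for i in range(1, 11)])
    fit = smf.ols(f"friendship_18 ~ prs + {covs}", data=df).fit()
    print(pt, fit.params["prs"], fit.pvalues["prs"])
```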
## Data availability
UK Biobank data are available through a procedure described at http://www.ukbiobank.ac.uk/using-the-resource/. ALSPAC data access is through a system of managed open access. The steps below highlight how to apply for access to the data included in this paper and all other ALSPAC data.
If you have any questions about accessing data, please contact alspac-data@bristol.ac.uk.
Schizophrenia, autism spectrum disorder, and major depressive disorder GWAS summary statistics are publicly available from the PGC (https://www.med.unc.edu/pgc/download-results/).
## Code availability
Software code for PRSice-2 is available at https://www.prsice.info/. All other code used is available upon request.
# Measurement of Postoperative Quality of Pain in Abdominoplasty Patients—An Outcome Oriented Prospective Study
## Abstract
[1] Background: Postoperative pain is a frequently underestimated complication significantly influencing surgical outcome and patient satisfaction. While abdominoplasty is one of the most commonly performed plastic surgery procedures, studies investigating postoperative pain are limited in the current literature. [2] Methods: In this prospective study, 55 subjects who underwent horizontal abdominoplasty were included. Pain assessment was performed using the standardized questionnaire of the Benchmark Quality Assurance in Postoperative Pain Management (QUIPS). Surgical, process, and outcome parameters were then used for subgroup analysis. [3] Results: We found a significantly decreased minimal pain level in patients with high resection weight compared to the low resection weight group ($p = 0.01$). Additionally, Spearman correlation shows a significant negative correlation between resection weight and the parameter “minimal pain since surgery” (rs = −0.332; $p = 0.013$). Furthermore, average mood was impaired in the low resection weight group, indicating a statistical tendency ($p = 0.06$, χ² = 3.56). We found statistically significantly higher maximum reported pain scores (rs = 0.271; $p = 0.045$) in elderly patients. Patients with shorter surgery showed a statistically significantly (χ² = 4.61, $p = 0.03$) increased demand for painkillers. Moreover, “mood impairment after surgery” showed a trend toward being enhanced in the group with shorter operation duration (χ² = 3.56, $p = 0.06$). [4] Conclusions: While QUIPS has proven to be a useful tool for the evaluation of postoperative pain therapy after abdominoplasty, continuous re-evaluation of pain therapy is a prerequisite for constant improvement of postoperative pain management and may be the first step toward a procedure-specific pain guideline for abdominoplasty. Despite a high satisfaction score, we detected a subpopulation with inadequate pain management: elderly patients, patients with low resection weight, and patients with a short duration of surgery.
## 1. Introduction
Being essential for postoperative complications, morbidity, and mortality, as well as for rehabilitation capacity, guideline-based pain therapy has become an integral part of almost all surgical disciplines [1].
Adequate pain medication not only reduces perioperative morbidity; several studies also describe a significant decrease in complications and a verifiable reduction in hospitalization days [1,2].
Postoperative pain management is therefore essential not only for the individual patient; the chronification of underestimated postoperative pain also represents an economic burden with enormous potential for optimization.
Pain management can be divided into non-medicinal and medicinal factors.
While non-medicinal factors include psychological and physical procedures, such as the application of cold to reduce postoperative swelling of an extremity, medicinal factors mainly focus on systemic pharmacotherapy.
Among these, based on international guidelines, treatment of moderate to severe pain should be based on a combination of opioids (tramadol, piritramide) and non-opioid analgesics (paracetamol, metamizole, NSAIDs, COX-2 inhibitors) [3].
Although this seems to be standardized, many studies have shown insufficient pain management [4,5]. For further standardized assessment and improvement, the interdisciplinary project “Quality Improvement in Postoperative Pain Therapy” (QUIPS) was initiated. Being the world’s largest acute pain registry and including data on process and outcome quality, this system allows the collection, evaluation, and improvement of acute pain therapy in participating institutions [6].
Although it is undisputed that adequate pain management is essential for individual outcome, there exist almost no studies evaluating postoperative pain in plastic surgery. Especially in semi-elective surgeries, such as body contouring procedures characterized by large wound areas, postoperative well-being can lead to faster mobilization and a reduction of hospitalization time. Nevertheless, hardly any literature is available on this topic.
Single case reports describe the existence of neuropathic pain syndromes of the N. iliohypogastricus and the N. cutaneus femoris lateralis after abdominoplasty and their avoidability [7]. Feng et al. describe the reduction of pain by a combination of local nerve blocks during abdominoplasty surgery [8].
Regarding pain medication, a recommendation of reduced opioid consumption after abdominal wall surgery can be found, but there exist no concrete guidelines and this recommendation has not been evaluated [9].
To sum up, although postoperative pain management has been extensively described to be of utmost importance for outcome and well-being, no standardized study of body contouring patients has been carried out to date.
Therefore, we used QUIPS in abdominoplasty patients to analyze pain characteristics and to define risk factors for enhanced postoperative pain.
## 2. Materials and Methods
This study was carried out following the guidelines of the Declaration of Helsinki and was approved by the dean of the university.
## 2.1. Inclusion and Exclusion Criteria
All patients undergoing abdominoplasty according to Pitanguy at the Department for Plastic and Aesthetic Surgery, Reconstructive and Hand Surgery at the Markus Hospital in Frankfurt am Main from January 2010 to December 2015 were included in this study.
Patients who underwent combination or revision procedures, such as autologous breast reconstruction or repair of rectus diastasis, were excluded from the study.
## 2.2. Data Collection
A standardized pain questionnaire (QUIPS) was administered on postoperative day one by a single study nurse, focusing on outcome (Appendix A) and process parameters (Appendix B) using a Numeric Rating Scale (NRS) from 0 to 10 or a dichotomous yes/no categorization. This questionnaire was filled out manually under supervision. Additionally, preoperative anesthesiologic assessments were screened for general data, such as age, sex, and weight, and for specific risk factors, comorbidities, and ASA score. Surgical protocols were screened for resection weight as well as for surgery time.
## 2.3. Surgical Procedure
All surgical procedures were carried out by one senior surgeon with one or two residents. The surgical procedure was standardized to prevent any technique-related bias: Preoperatively, the patient is marked in a standing position to define the resection lines. After proper positioning, the surgical area is sterilely draped. The skin incision is made with the scalpel and the subcutaneous preparation with monopolar diathermy. The umbilicus is incised and sutured cranially with silk as a holding suture. Epifascial dissection of the fat-skin soft tissue is carried out below and above the umbilicus up to the xiphoid, sparing the lateral sub- and intercostal perforator vessels; no specific attention is given to nerve sparing. Wound drains are inserted, with drainage at the mons pubis. The patient is then flexed at the hip and the resection area re-defined. The skin-fat flap is resected, and the skin and tissue are adapted with Vicryl 2-0 and 3-0. The new umbilical position is placed and the umbilicus sutured with Vicryl 4-0 subcutaneously and Prolene 4-0. Wound closure is achieved with a continuous intradermal Biosyn 3-0 suture, followed by a sterile wound dressing and an abdominal belt.
## 2.4. Pain Management
All patients received pain medication via a standardized protocol following the official German guidelines for pain management (Appendix C) [3].
## 2.5. Statistical Analysis
Data analysis was performed using SPSS version 22.0 (IBM Corporation, New York, NY, USA). Initially, the database created by QUIPS was completed with the operation- and patient-related data. After conversion, analysis of descriptive data was performed. All data are given as mean and standard deviation (SD). Nominal data were analyzed using Pearson’s chi-square test, Fisher’s exact test, and Spearman’s rho correlation.
To analyze variables and individual subgroups of this population, univariate and multivariate correlation analyses as well as the Mann-Whitney U test and the Kruskal-Wallis test were used. The median was used for the division of groups. A p-value < 0.05 was considered statistically significant.
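For transparency, the subgroup comparisons described here follow a simple pattern; a hedged sketch in Python (scipy) with hypothetical column names, rather than the actual SPSS workflow:

```python
# Hedged sketch of the subgroup analysis: median split, Mann-Whitney U for
# ordinal NRS outcomes, Spearman correlation, and chi-square for dichotomous
# QUIPS items. Column names are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu, spearmanr, chi2_contingency

df = pd.read_csv("quips_abdominoplasty.csv")

# Median split on resection weight (median was 2180 g in this cohort).
med = df["resection_weight_g"].median()
high = df[df["resection_weight_g"] > med]
low = df[df["resection_weight_g"] <= med]

u, p = mannwhitneyu(high["min_pain_nrs"], low["min_pain_nrs"])
print("minimal pain, high vs. low resection weight: p =", p)

rs, p = spearmanr(df["resection_weight_g"], df["min_pain_nrs"])
print(f"Spearman rs = {rs:.3f}, p = {p:.3f}")

chi2, p, _, _ = chi2_contingency(
    pd.crosstab(df["short_surgery"], df["demanded_painkiller"])
)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")
```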
## 3.1. Demography
In total, 268 patients underwent abdominoplasty within the given timeframe. Of these, 110 were excluded due to surgical technique. Of the remaining 158 patients, 55 had a complete QUIPS dataset and gave written consent for participation. Among those, 41 ($75\%$) were female and 14 ($25\%$) were male, aged between 21 and 67 years. Mean age and mean height were 42.93 ± 9.9 years and 169.22 ± 8.13 cm, respectively. The average weight of patients was 87.05 ± 19.73 kg.
## 3.2. Surgical Procedures
The average resection weight was 2913 ± 2226 g, and the average duration of surgery was 129.49 ± 37.48 min, with 215 min representing the longest and 56 min the shortest surgical time.
## 3.3. Preoperative Measurements
Patients were preoperatively classified using the ASA score. Thereby, five ($9\%$) subjects were categorized as ASA-I, 42 ($76\%$) as ASA-II, and eight ($15\%$) as ASA-III. No ASA-IV or V subjects underwent surgery. In total, seven ($13\%$) patients stated regular intake of painkillers before surgery due to chronic diseases.
## 3.4. QUIPS Outcome Parameters Overall
Table 1 depicts the overall outcome results in our population.
Mean pain on exertion was reported as 4.42 ± 1.54, with the maximum pain being 5.35 ± 2.04 and the minimum pain 1.95 ± 1.43. Thirty-eight of the 55 patients ($69\%$) exceeded the pain level of 4, which is normally considered the tolerance threshold for demanding painkillers. Patient satisfaction was reported to be 11.95 ± 3.03 on average.
In terms of mobility, 34 ($62\%$) patients reported being significantly limited due to pain. When breathing or coughing, 27 ($49\%$) mentioned pain while they in- or exhaled. Thirteen of the respondents ($24\%$) felt their sleep was disturbed, and 12 ($23\%$) reported their mood being affected by postoperative pain. Twenty-six ($47\%$) of all respondents reported postoperative pain-related fatigue. Nausea and vomiting were mentioned by only 12 ($23\%$) and nine ($13\%$), respectively. Chronic pain was previously described by seven ($13\%$) of the total collective.
Nevertheless, only 13 ($24\%$) demanded extra painkillers.
## 3.5. QUIPS Process Parameters Overall
Preoperatively, midazolam 7.5 mg per os was offered to all patients as a sedative but was taken only by six patients ($11\%$).
For intraoperative pain relief, an opioid (Sufentanil) was used in all except five ($9\%$) cases. Non-opioids were used in seven patients ($13\%$).
In the postoperative care unit, opioids were used in 53 ($96\%$) patients and in 35 ($64\%$) patients, non-opioids were injected intravenously.
Postoperatively, metamizole was injected in 35 ($64\%$) cases. Diclofenac was used as a second-line agent in five cases ($9\%$). In addition, 15 ($27\%$) patients did not require any analgesics. In the majority of subjects ($$n = 31$$, $56\%$), no opioid was needed. However, in 24 patients ($44\%$), Piritramid 7.5 mg i.v. was used postoperatively.
## 3.6. Influence of Surgical Parameters on Postoperative Pain Outcome Parameters
For a profound analysis of the quality of postoperative pain therapy, subpopulations were created based on surgical parameters and patient characteristics. Medians were used as dividing factors: resection weight (median 2180 g), age (median 43 years), and duration of surgery (median operation time 125 min).
## 3.6.1. Resection Weight
When considering the factor “resection weight”, we found significantly decreased minimal pain in patients with high resection weight compared to the low resection weight group ($p = 0.01$), as shown in Table 2. Additionally, Spearman correlation analysis shows a significant negative correlation between resection weight and the parameter “minimal pain since surgery” (Spearman’s rho rs = −0.332; $p = 0.013$).
Additionally, average mood was impaired in the low resection weight group. A p-value of 0.06 and a χ²-value of 3.56 indicate a trend that does not reach statistical significance.
## 3.6.2. Age
With a range of 21 to 67 years, the average age is 42.93 ± 9.9 years with a median of 43 years.
We found statistically significantly higher maximum reported pain scores (Spearman’s rho rs = 0.271; $p = 0.045$) in older patients, indicating enhanced pain within this group, as shown in Table 3.
## 3.6.3. Duration of Surgery
The average operation time in this study was 129.49 ± 37.48 min, with a range of 159 min and a median of 125 min.
Table 4 shows that patients with a shorter surgery had a statistically significantly (χ² = 4.61, $p = 0.03$) increased demand for painkillers. Additionally, “mood impairment after surgery” showed a clear trend toward being enhanced in the group with shorter operation duration (χ² = 3.56, $p = 0.06$).
## 4. Discussion
Aimed at a redefinition of the body contour, abdominoplasty is performed by wide undermining of the tissue of the abdominal wall, with its high density of thoracolumbar sensory nerves [10]. Due to this fact, it remains uncertain how much nerve injury is caused by wide preparation. Pogatzki-Zahn emphasizes that intraoperative nerve irritation can cause chronic pain syndromes [11], and nerve injury during this procedure represents an underestimated problem [12].
Despite SOPs and guidelines, as well as improved patient education, pain is consistently considered the fifth vital sign [13], yet its postoperative management is far from sufficient [5,6].
With a few exceptions, there exists no literature on postoperative pain management in standard plastic surgery procedures. One reasonable tool for pain relief, published with a small cohort, is an additional regional nerve block of the area of interest (e.g., breast [14] or abdomen [15]) or the use of lidocaine-infusing pain pumps [16]. These measures led to a reduced hospital stay, overall pain reduction, and reduced pain medication compared to the control group [17].
Nevertheless, literature analyzing pain quality in abdominoplasty with its outcome parameters as a benchmarking tool for evaluation is missing.
Therefore, for the first time, we implemented QUIPS for the analysis of plastic surgery pain management.
In our analysis, the mean maximal pain intensity overall was 5.35 out of 10 on a numeric rating scale (NRS). Pain levels in 38 subjects ($69\%$) were above a value of 4 and, according to the S3 guideline on perioperative pain treatment [18], therefore needed to be addressed to prevent long-term functional impairment.
Comparing our cohort's maximal pain intensity with other common procedures, such as appendectomy (5.20) or functional endoscopic sinus surgery (3.96), this level represents a relatively high maximum pain level. Nevertheless, in comparison to traumatological procedures (such as cruciate ligament plasty, up to 6.0 out of 10), it shows a lower maximum pain intensity [19].
We used the median to split participants into two subpopulations regarding their resection weight, age, and duration of surgery, consistent with our clinical experience.
The cohort with a medical indication largely comprises patients with a relevant, functionally impairing dermatochalasis of the abdomen, a longer operation time, a higher ASA status, and, generally, younger patients after bariatric surgery or self-induced massive weight loss.
In contrast, the aesthetic patient population presents with a moderately low resection weight and less skin ptosis: typically younger women after pregnancy asking for a “mommy makeover”, or the “best agers”, who are older but healthier, with a pronounced focus on their outward appearance and self-motivated initiative for a tummy tuck.
A direct comparison between those two groups is desired and needs further research.
Nevertheless, we evaluated the outcome parameters in relation to the patient specific parameters of our subgroups.
A higher resection weight correlates with a higher bodyweight and this is regarded as a predictor for a higher complication rate in abdominoplasty [20,21].
Interestingly, none of these studies analyzed the influence of the factor “resection weight” itself (in our study ranging from 610 g up to 9600 g).
In contrast to our prediction, our observations show that a high resection weight goes along with significantly lower pain scores, whereas mood impairment was found in patients with low resection weights.
There are no data in the literature confirming these findings for this procedure.
In reduction mammaplasty, Strong et al. found that patients with higher resection weights have significantly less pain than those with low resection weights [22].
A reasonable explanation for our results is the decreased sensibility of the abdominal wall in patients with a large apron of fat and high resection weight. We assume that patients with a hanging bulge of skin and fat tissue have a higher basic pain level and tolerate postoperative surgical pain better than those with lower resection weights [23]. We can further postulate that chronic local hypoxia, with a consequent increase in lactate level and a lowering of the pH value in the apron of fat, elevates the excitation threshold of the peripheral nervous system in this local bulge, delaying or even preventing the triggering of an action potential. This hypothesis was initially drawn by Kim et al., who postulated an ischemia-related pain mechanism when showing a significantly elevated lactate level in postoperative wounds [24].
However, one has to be aware that resection weight can be high in patients with massive weight loss with normal BMI.
The correlation analysis shows that older patients have significantly higher maximal pain levels, which is contrary to the current literature, which emphasizes young age as a risk factor for postoperative pain [11,25,26,27]. A higher pain tolerance is attributed to older patients, with reduced analgesic consumption, reduced nociceptive activity, and a lower need for morphine medication [28]. Elderly patients differ from younger people in the distribution, metabolism, and excretion of pain medication [29].
Morphologically, older skin shows dermal atrophy with less perfusion, fewer elastic fibers, and fewer Meissner and Vater-Pacini corpuscles, resulting in lower tactile and pressure sensibility [30,31]. These factors decrease tolerance of the shear forces and tension occurring in an abdominoplasty. Young age as a risk factor for developing pain could not be confirmed in our study.
A longer duration of surgery is broadly accepted as a risk factor for pain as well [24,32]. Interestingly, patients with a shorter operation time had a significantly higher desire for pain medication and a strong tendency toward mood disturbance. Since subjects with a shorter operation time mostly have a lower resection weight, reflecting aesthetic abdominoplasty cases, their expectations for this procedure might be higher because they pay for it themselves, and the criteria of their pain ranking are probably much stricter than in the insurance-paid comparison group. Additionally, due to the shorter surgery time, subjects mobilize themselves earlier, and this might lead to wound tension, with a worse pain outcome reflected in an increased demand for painkillers and mood disturbance [33]. The removal of the suction drain itself causes pain, discomfort, and anxiety [34]. The survey took place on the first postoperative day; patients with a shorter duration of surgery are probably more likely to have their suction drains removed earlier, in temporal connection to the QUIPS interview, causing high pain intensity.
The statement that “longer operation time is a predictor of increased postoperative pain” can be rejected for our investigation.
Limitations are the relatively small cohort, the single-center design, and the relatively short time frame, which need to be addressed in ongoing investigations. Additionally, due to the structure of QUIPS, no conclusion can be drawn about pre-existing chronic disorders influencing postoperative pain medication.
## 5. Conclusions
Abdominoplasty is a standardized procedure that is, due to its large wound area, well suited for the evaluation of pain levels. QUIPS has proven to be a successful tool for this first evaluation of postoperative pain quality in abdominoplasty procedures.
Despite a high overall satisfaction score, we detected subpopulations with inadequate pain management: elderly patients, patients with a low resection weight, and those with a short duration of surgery. To what extent these newly found risk factors are suitable for adapting tailored pain management requires further study, paving the way for a procedure-specific pain guideline.
# Evaluation of antioxidant properties of nanoencapsulated sage (Salvia officinalis L.) extract in biopolymer coating based on whey protein isolate and Qodumeh Shahri (Lepidium perfoliatum) seed gum to increase the oxidative stability of sunflower oil
## Abstract
Sage leaf extract (SLE) is considered an excellent source of bioactive compounds, mainly because of its high content of phenolics, which are widely known as natural antioxidants. This study aimed to compare the performance of free and encapsulated SLE with different coatings in protecting sunflower oil against oxidative deterioration. The coating materials were whey protein isolate and qodumeh seed gum at different ratios (1:0, 1:1, and 0:1). Each nanocapsule was analyzed for particle size, zeta potential, encapsulation efficiency, phenolics release, and SEM images. The total phenolic content of SLE was 31.12 mg GA/g. The antioxidant activity of SLE increased in both DPPH and FRAP assays as the extract concentration rose from 50 to 250 ppm. All nanoparticles exhibited nanometric size, negative zeta potential, encapsulation efficiency higher than $60\%$, and gradual release during storage. The oxidative stability of sunflower oil with or without the incorporation of 250 ppm of free/encapsulated SLE was evaluated during 24 days of storage at 60°C. Peroxide value (PV), thiobarbituric acid value (TBA), oxidative stability index (OSI), color index (CI), and conjugated dienes (CD) were determined. COMP nanoparticles showed the lowest PV, TBA, CI, and CD, but both SGUM and WHEY were also more effective in delaying oil oxidation than TBHQ and the free extract. A higher OSI was observed in oil containing nanoparticles with the composite coating. The results obtained reinforce the use of whey protein isolate and qodumeh seed gum as a coating for encapsulating SLE as a natural antioxidant to increase the shelf life of sunflower oil.
Nanoencapsulation is a very useful method for preserving antioxidant compounds, as it leads to their controlled release into the oil. In our research, nanoencapsulation was very effective in preserving and controlling the release of the sage extract.
## INTRODUCTION
Sunflower (Helianthus annuus) is one of the most important oil crops grown worldwide due to its high oil yield and lack of antinutritional factors (Aly et al., 2021; Jafari et al., 2022). Sunflower oil (SFO) is a nutritious vegetable oil that contains more than $85\%$ polyunsaturated fatty acids (PUFA), especially linoleic acid, and is also used for medical purposes (Meng et al., 2021; Sayyari & Farahmandfar, 2017). The ratio of omega-3 to omega-6 fatty acids is prominent in providing cardiovascular and heart health benefits (Aly et al., 2021).
However, due to its fatty acid composition rich in PUFA, SFO is one of the oils most susceptible to rancidity and oxidative deterioration. Fat oxidation results in unpleasant flavors, discoloration, and changes in the texture, nutritional value, shelf life, and appearance of SFO, so synthetic antioxidants such as tert-butylhydroquinone (TBHQ), propyl gallate (PG), butylated hydroxytoluene (BHT), and butylated hydroxyanisole (BHA) have been used (Razavi & Kenari, 2021). Although synthetic antioxidants are attractive due to their low cost, wide availability, great stability, and effectiveness, their use is limited as they may generate health risks, gastrointestinal tract problems, and cancer risk (Xu et al., 2021). Today, there is growing interest in exploring natural antioxidants, such as plant extracts, which provide high antioxidant activity and improved sensory properties (Kenari & Razavi, 2022; Wang et al., 2020).
The antioxidant properties of plants are effective in delaying oxidation and rancidity in fats and oils, with activity similar to that of chemically synthesized antioxidants (Aly et al., 2021; Wang et al., 2020). Natural extracts from different herbs, such as *Heracleum persicum* (Kenari et al., 2020), *Fumaria parviflora* L. (Razavi & Kenari, 2021), sesame (Esmaeilzadeh Kenari & Razavi, 2022), and *Rosmarinus officinalis* L. (Jafari et al., 2022), confer oxidative stability, which is related to the presence of natural phenolic compounds.
Sage (*Salvia officinalis* L.), an evergreen shrub, belongs to the mint family (Labiatae). It is known for its aroma, flavor, and taste. Sage contains a wide array of bioactive compounds like phenolics, terpenoids, and organic acids that have shown antioxidant, antimicrobial, anticancer, and anti‐inflammatory activities (El‐Sayed & Youssef, 2019; Naziruddin et al., 2022). The extraction of bioactive compounds from plant materials with conventional methods such as maceration, shaker, and hydro‐distillation is laborious due to long extraction time, low efficiency, and hazardous solvents (Wrona et al., 2017). Ultrasound‐assisted extraction (UAE) process is a potentially useful technique for the purification and isolation of bioactive compounds. The high‐intensity and high‐frequency sound waves and also their interaction with plant materials distinguish UAE from the conventional methods (Sadat et al., 2021).
The efficiency of plant extracts pertains to their biological activities and physicochemical properties. Low stability, low water solubility, and the unpleasant taste of plant extracts limit their application in food formulations. Encapsulation is a technology for maintaining the biological activity, controlled release, and bioavailability of bioactive compounds from plant materials, which allows their application in different food formulations while preserving their functional properties (Reddy et al., 2022). It also shields bioactive compounds from light, oxygen, pH, water, and other adverse conditions (Jamshidi et al., 2020). A range of food-grade biopolymers is used to create nanoparticles, such as polysaccharides, proteins, and combinations of them (Razavi et al., 2021). Seed gums are new and plentiful polysaccharides. The *Lepidium perfoliatum* seed, known as Qodumeh Shahri in Iran, produces a large amount of mucilage; it can immobilize and bind a lot of water and increase the viscosity of foods (Jamshidi et al., 2020). Whey protein isolate is obtained during the production of cheese or casein as a by-product of the dairy industry and is widely used in the food industry because of its functional properties, emulsification, gelation, film formation, and water solubility (Tavares & Noreña, 2019).
Considering that sunflower oil is sensitive to oxidation like other vegetable oils, it is necessary to increase its shelf life by adding natural antioxidants as safe preservatives. The use of extract encapsulation controls the release of antioxidant compounds from the extract during the storage. To the best of our knowledge, studies carried out so far have predominantly focused on using free extracts to increase the shelf life of vegetable oils. Also, no research has been published about the antioxidant activity of the encapsulated sage extract in whey protein isolate and Qodumeh Shahri (Lepidium perfoliatum) seed gum in sunflower oil. Therefore, the present study aimed to evaluate [1] the antioxidant activity of the sage extract, [2] the effect of coating material on the properties of nanocapsules, and [3] the effect of free and nanoencapsulated extract on the extension of oxidative stability of sunflower oil during the accelerated thermal condition.
## Material
The common sage was collected from the local field area near Sari (Mazandaran, Iran) in the summer of 2021. Sunflower oil without antioxidant was purchased from North Agro-industrial Oil Company. All solvents and chemicals were purchased from Sigma-Aldrich. Qodumeh shahri seed gum was purchased from Reyhan Gum Parsian.
## Preparation of sage leaf extract
The leaves of sage were dried immediately after harvesting in a shady place for 1 week, until the moisture content was below $10\%$. The dried sage leaves were ground into a powder using a mechanical grinder (Habi, Pars-Khazar), and the powder was sieved through a 200-μm sieve to remove any large pieces. To prepare the sage leaf extract, 50 g of sage leaves was mixed with 250 ml of ethanol: water (70:30) solvent. The extraction was done in an ultrasonic bath (6.5l200 H, Dakshin, India) at 35°C for 30 min at a frequency of 35 kHz. The mixture was filtered using Whatman paper No. 1. Then, the solvent was evaporated using a rotary evaporator (RE 120) at 35°C, and the final extract was kept at −18°C (Razavi & Kenari, 2021).
## Total phenolic content of sage leaf extract
The total phenolic content (TPC) of the sage leaf extract was determined according to the method reported by Doymaz and Karasu (2018). Initially, 2.5 ml of Folin–Ciocalteu phenol reagent (0.2 N) was added to 0.5 ml of extract and mixed with 2 ml of Na2CO3 ($7.5\%$). This mixture was kept for 20 min at room temperature in a dark place. After incubation, the absorbance was recorded at 760 nm using an ultraviolet–visible spectrophotometer (Cintra 6, GBS Scientific). The total phenolic content was expressed as gallic acid equivalents using a gallic acid calibration curve (Doymaz & Karasu, 2018).
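For illustration, expressing a sample against the gallic acid calibration curve reduces to a linear fit of absorbance on standard concentration; the minimal Python sketch below uses placeholder standard readings, not the measured curve.

```python
# Minimal sketch: expressing TPC via a gallic acid calibration curve.
# Standard concentrations and absorbances are illustrative placeholders.
import numpy as np

std_conc_mg_ml = np.array([0.0, 0.02, 0.04, 0.08, 0.16])   # gallic acid standards
std_abs_760nm = np.array([0.00, 0.11, 0.22, 0.45, 0.90])   # absorbance at 760 nm

slope, intercept = np.polyfit(std_conc_mg_ml, std_abs_760nm, 1)  # linear fit

sample_abs = 0.37
sample_conc = (sample_abs - intercept) / slope  # mg gallic acid equivalents/ml
print(f"Sample TPC = {sample_conc:.3f} mg GAE/ml")
```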
## Determination of antioxidant activity
The antioxidant activity of the extract was determined using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical-scavenging method and the ferric reducing antioxidant power (FRAP) assay. Briefly, for DPPH, 0.1 ml of extract and 4.9 ml of DPPH solution (0.1 mM in ethanol) were mixed thoroughly and held at 25°C for 30 min, and the absorbance was then recorded at 517 nm. For FRAP, 3 ml of freshly prepared FRAP solution, comprising FeCl3·6H2O (0.02 M in water), TPTZ (0.01 M dissolved in 0.04 M HCl), and acetate buffer (0.3 M, pH = 3.6) at a ratio of 1:1:10, was mixed with 10 μl of extract, and the increase in absorbance was recorded after 30 min at 593 nm. Antioxidant activity was expressed as mmol Trolox/g (Doymaz & Karasu, 2018).
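For illustration, DPPH scavenging is typically computed as the relative drop in absorbance versus the extract-free control, and an IC50 can be interpolated from a concentration series; the minimal Python sketch below uses placeholder absorbances, not assay data.

```python
# Minimal sketch: DPPH radical-scavenging activity and IC50 by linear
# interpolation. Absorbance values are illustrative placeholders.
import numpy as np

a_control = 0.820                      # DPPH solution without extract
conc_ppm = np.array([50, 100, 150, 200, 250])
a_sample = np.array([0.61, 0.48, 0.39, 0.30, 0.22])

inhibition = (a_control - a_sample) / a_control * 100  # % scavenging
ic50 = np.interp(50.0, inhibition, conc_ppm)           # conc. giving 50% inhibition
print(inhibition.round(1), f"IC50 = {ic50:.0f} ppm (interpolated)")
```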
## Sage leaf extract encapsulation
Whey protein isolate and qodumeh shahri seed gum solutions at different ratios (1:0, 1:1, and 0:1) were used as coating materials. Initially, 0.05 g of the coating powders was dispersed in deionized water at 30°C and, after cooling, mixed overnight to enhance hydration. Then, 10 ml of sage extract was combined with 40 ml of Tween 80 and 50 ml of sunflower oil while homogenizing with a magnetic stirrer at 100 rpm for 15 min. The formed emulsion was then homogenized again using an Ultra-Turrax homogenizer (IKA Labortechnik) at 15,000 rpm for 10 min, followed by adding the coating solution to the nanoemulsion at a 5:1 ratio (Jafari et al., 2022).
## Properties of encapsulated sage extract
Nanoemulsions were dried using a freeze dryer (SP Scientific) at −50°C and 0.017 mPa for 48 h. The particle size, polydispersity index, and zeta potential of the nanoemulsions were measured using a Mastersizer light-scattering instrument (Malvern Instruments Ltd.). To evaluate the encapsulation efficiency (EE) of the sage extract, 200 mg of each nanoemulsion was mixed with hexane: water: methanol (50:42:8 v/v/v) to destroy the coat of the nanocapsules, and the surface phenolic content (SPC) and the total phenolic content (TPC) were measured. The EE was calculated using Equation (1):

$$EE\% = \frac{TPC - SPC}{TPC} \times 100 \quad (1)$$

The surface morphology of the nanoemulsions was examined by SEM (Malvern Instruments Ltd.). The nanoemulsions were fixed onto double-sided adhesive carbon tabs mounted on SEM stubs and coated with gold (Kenari et al., 2020).
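As a worked illustration of Equation (1), the following minimal Python sketch computes EE from TPC and SPC; the input values are placeholders chosen only to land near the magnitude reported later in Table 1.

```python
# Minimal sketch of Equation (1): encapsulation efficiency (EE) from total
# (TPC) and surface (SPC) phenolic content. Inputs are placeholders.
def encapsulation_efficiency(tpc_mg: float, spc_mg: float) -> float:
    """EE% = (TPC - SPC) / TPC * 100."""
    return (tpc_mg - spc_mg) / tpc_mg * 100.0

print(f"EE = {encapsulation_efficiency(31.12, 7.85):.2f}%")  # ~74.8%
```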
## Release rate of phenolic compounds
The release rate of phenolic compounds was measured according to the method described by Esmaeilzadeh Kenari et al. (2020). Initially, 20 g of each type of nanoparticle was poured into separate bottles and kept in an incubator at 60°C for 24 days. Then, 5 ml of phosphate buffer was mixed with 5 g of nanoparticles and centrifuged for 90 min at 1500 g at room temperature, and the TPC of the lower phase was determined. The release rate was calculated using Equation (2) (Kenari et al., 2020):

$$\text{Release rate}\% = 100 - 100 \times \frac{\text{encapsulated TPC in the outer phase}}{\text{encapsulated TPC in the inner phase}} \quad (2)$$

A gradual release of phenolic compounds was observed in all samples (Table 2), and the differences were significant. There is a positive correlation between particle diameter and the release rate of phenolic compounds from nanoparticles. This result is in line with reports by other researchers on the gradual release of phenolic compounds from extracts of Iranian golpar (Kenari et al., 2020), rosemary leaf (Jafari et al., 2022), olive leaf (Mohammadi et al., 2016), and *Ferula persica* into soybean oil (Estakhr et al., 2020).
**TABLE 2** Release rate (%) of phenolic compounds from the nanocapsules during storage at 60°C

| Sample | Day 0 | Day 4 | Day 8 | Day 12 | Day 16 | Day 20 | Day 24 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SGUM | 5.17 ± 1.1a | 10.21 ± 1.2a | 16.48 ± 2.0a | 25.76 ± 3.5a | 39.91 ± 4.2a | 51.42 ± 5.1a | 66.70 ± 5.3a |
| WHEY | 5.02 ± 0.9b | 8.22 ± 1.0b | 11.35 ± 2.4b | 20.76 ± 1.2b | 28.70 ± 3.2b | 34.91 ± 4.8b | 48.52 ± 2.1b |
| COMP | 4.81 ± 0.8c | 7.45 ± 1.1c | 10.36 ± 1.5c | 17.08 ± 2.5c | 22.19 ± 2.7c | 30.25 ± 2.7c | 43.22 ± 3.5c |
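A minimal Python sketch of Equation (2) is given below; the outer- and inner-phase TPC values are placeholders chosen to roughly mimic the SGUM release profile in Table 2, not measured data.

```python
# Minimal sketch of Equation (2): release rate from encapsulated TPC in the
# outer vs. inner phase. Values are placeholders, not measured data.
def release_rate(tpc_outer: float, tpc_inner: float) -> float:
    """Release% = 100 - 100 * (TPC outer phase / TPC inner phase)."""
    return 100.0 - 100.0 * (tpc_outer / tpc_inner)

for day, tpc_outer in [(0, 22.1), (12, 17.3), (24, 7.8)]:
    print(f"day {day}: {release_rate(tpc_outer, 23.3):.1f}% released")
```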
## Oil storage and tests
Free sage extract (FREE) and sage extract nanoencapsulated in seed gum (SGUM), whey protein isolate (WHEY), and the complex coating (COMP) were added to sunflower oil at 250 ppm. The synthetic antioxidant TBHQ (TBHQ) was employed at a concentration of 100 ppm to compare the efficiency of the sage extract. A control (CONT) sample without antioxidant and the other samples were placed in separate bottles and kept in an incubator at 60°C for 24 days. Oil samples were removed for analysis on days 0, 4, 8, 12, 16, 20, and 24. The release rate of phenolic compounds (Jafari et al., 2022), peroxide value (PV), thiobarbituric acid value (TBA), conjugated dienes (CD) (AOCS, 2009), oxidative stability index (OSI) (Farahmandfar et al., 2018), and color index (CI) (Kenari et al., 2020) were determined every 4 days.
## Statistical analysis
All experiments were performed in triplicate. Experimental data were analyzed using SPSS software (Statistical Package for the Social Sciences) version 22. Significant differences ($p \leq .05$) were identified using Duncan's multiple range test.
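As an illustration of this pipeline, the minimal Python sketch below runs a one-way ANOVA followed by a post hoc multiple-comparison test; since Duncan's multiple range test is not available in the common Python statistics stack, Tukey's HSD is shown as an analogous procedure, and the triplicate values are invented placeholders.

```python
# Minimal sketch: one-way ANOVA on triplicate measurements, then a post hoc
# pairwise comparison at alpha = .05. Tukey's HSD stands in for Duncan's test;
# the peroxide-value triplicates below are placeholders, not study data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

pv_sgum, pv_whey, pv_comp = [34.1, 35.0, 33.5], [30.2, 31.1, 29.8], [26.0, 25.1, 26.7]
print(f_oneway(pv_sgum, pv_whey, pv_comp))  # overall group difference

values = np.concatenate([pv_sgum, pv_whey, pv_comp])
groups = ["SGUM"] * 3 + ["WHEY"] * 3 + ["COMP"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```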
## Total phenolic content of sage extract
The total phenolic content (TPC) of the sage extract was 31.12 mg GA/g. Nutrizio et al. (2020) explored high-voltage electrical discharge and a conventional method for extracting bioactive compounds from sage; they reported 19.67 and 42.13 mg GAE/g for conventional and electrical-discharge extraction, respectively (Nutrizio et al., 2020). The TPC of an aqueous extract of sage obtained by hot-water extraction was 89.65 mg CA/g DW (Kontogianni et al., 2022), and a value of 73.7 mg CA/g DW was reported by Kontogianni et al. (2013) for sage extract. The differences in TPC may be attributed to the extraction time and temperature, the type of solvent, the extraction method, and the variety of the sage plants. Hamrouni-Sellami et al. (2013) measured the effect of different drying temperatures on the TPC of sage extract and reported TPC values from 0.4 to 2.5 mg GAE/g DW (Hamrouni-Sellami et al., 2013).
## Antioxidant activity of sage extract
The antioxidant activity of the sage extract was determined by the DPPH radical-scavenging and FRAP assays. Figure 1a,b presents the antioxidant properties of different concentrations of sage extract. The antioxidant activity of the extract increased with increasing extract concentration. A statistically significant difference was observed between samples in the DPPH method, while in the FRAP assay the 50 and 100 ppm concentrations showed no statistically significant difference. Notably, the sage extract at 250 ppm had higher antioxidant activity than TBHQ in both the DPPH and FRAP methods. Hamrouni-Sellami et al. (2013) reported higher antioxidant activity of sage extract than BHA, BHT, and ascorbic acid in DPPH, FRAP, and β-carotene assays (Hamrouni-Sellami et al., 2013), which is in line with the results of our study. The findings of the present study demonstrate that 250 ppm of sage extract can exhibit antioxidant activity equal to that of the synthetic antioxidant TBHQ. Bigi et al. (2021) incorporated sage extract into a biopolymeric chitosan/hydroxypropyl methylcellulose coating and reported antioxidant activity due to the presence of bioactive compounds such as phenolics and flavonoids (Bigi et al., 2021). The antioxidant activity of sage extract is related to the presence of carnosol, rosmarinic acid, rosmanol, quinic acid, and carnosic acid (Generalić et al., 2012; Kontogianni et al., 2013; Oudjedi et al., 2019). Kontogianni et al. (2013) reported antioxidant activity for sage extract in both the DPPH and FRAP methods (IC50 = 27.41 μg DW/ml and 536.81 mg Trolox/g DW, respectively) (Kontogianni et al., 2013). The antioxidant activity of sage extract obtained by electrical discharge, conventional methods, and microwave extraction has also been reported by other researchers (Generalić et al., 2012; Hamrouni-Sellami et al., 2013; Nutrizio et al., 2020). Similarly, the literature reports a significant increase in both DPPH and FRAP antioxidant activity with an increase in the TPC of the extract (Esmaeilzadeh Kenari & Razavi, 2022; Kenari et al., 2020; Razavi & Kenari, 2021).
**FIGURE 1:** *Antioxidant activity of sage extract. (a) DPPH radical scavenging activity, (b) ferric reduction antioxidant power*
## Properties of nanocapsules
The results for the particle size of the different nanocapsules are shown in Table 1. All nanocapsules had a size below 270 nm, and statistically significant differences were observed. The pressure of the Ultra-Turrax, together with the sonication energy, produced the nanosized particles (Razavi et al., 2020). The PDI is among the most important characteristics of nanocarrier systems; the PDI of all samples was 0.300 or below, which indicates a normal particle size distribution. The zeta potential is helpful for determining the net charge of nanocapsules. The zeta potential of all nanocapsules was negative because of the negative nature of whey protein isolate and the anionic compounds in the seed gum. Tavares and Noreña (2019) reported a negative charge for extract encapsulated in whey protein isolate and chitosan, attributed to the negative charge of whey protein isolate (Tavares & Noreña, 2019). The lowest zeta potential was observed in the nanocapsule prepared using the complex coating, which is attributed to an intensified negative charge. The EE of the extract ranged from $61.54\%$ to $74.77\%$; the highest and lowest EE were observed in the nanocapsules prepared with seed gum and whey protein isolate, respectively. EE values higher than $50\%$ were also reported by other researchers (Hosseinialhashemi et al., 2020; Razavi et al., 2020; Rezaei Savadkouhi et al., 2020).
**TABLE 1**
| Sample | Particle size (nm) | PDI | Zeta potential (mV) | EE (%) |
| --- | --- | --- | --- | --- |
| SGUM | 270.0 ± 6.7a | 0.288 ± 0.02c | −35.2 ± 2.1b | 74.77 ± 4.2a |
| WHEY | 255.3 ± 5.4b | 0.294 ± 0.04b | −24.17 ± 1.8a | 61.54 ± 4.0c |
| COMP | 217.4 ± 6.2c | 0.300 ± 0.01a | −41.36 ± 3.6c | 68.16 ± 3.5b |
## Morphology of nanocapsules
The morphological structure of nanocapsules depends on the interactions between the coating components, which affect the final physicochemical properties. The surface morphology of the nanocapsules is presented in Figure 2. The surface of all nanocapsules was smooth and did not show cracks, pores, or bubbles. Figure 2c indicates high compatibility between the gum and the protein in forming the wall coating. These surface morphology images confirm that the sage extract was well encapsulated into the polymer matrix (Esmaeilzadeh Kenari & Razavi, 2022). Similar results were observed for nanocapsules of Iranian golpar (Kenari et al., 2020), *Fumaria parviflora* (Razavi & Kenari, 2021), rosemary leaf (Jafari et al., 2022), and sesame seed extract (Esmaeilzadeh Kenari & Razavi, 2022).
**FIGURE 2:** *SEM images of nanoencapsulated sage extract in different coatings. (a) Whey protein isolate, (b) seed gum, and (c) complex of protein and gum*
## Oil oxidation
Oils with a high degree of unsaturation are prone to autoxidation. The simplest tests for evaluating oil autoxidation are PV and TBA. Figure 3a shows the PV of each sample in relation to the days of storage at 60°C. In all samples, a continuous increase in PV was observed over time. In the control sample, after primary oxidation reached the maximum PV, a decrease in PV was observed, which indicates the stage at which the rate of peroxide decomposition exceeds the rate of peroxide formation. The PV of all samples at the initial time was 1.86 meq/kg; the rate of oil oxidation during storage therefore depends on the type of antioxidant added. The control sample exhibited the highest peroxide level during storage (76.48 meq/kg), and after 20 days of storage a decrease in its PV was observed. During storage, sunflower oil containing nanoencapsulated sage extract showed a lower PV than oil containing the free sage extract; in other words, the nanoencapsulated extract was more effective in delaying the oxidation process than the free extract during the first stage. A similar result was observed by Royshanpour et al. (2020), who reported a lower PV in soybean oil enriched with nanoencapsulated M. piperita extract than with the free extract (Royshanpour et al., 2020). The control sample exhibited the highest PV, followed by FREE, TBHQ, SGUM, WHEY, and COMP. In a study conducted by Dauber et al. (2022), the antioxidant activity of olive leaf extract in canola oil was measured; a higher PV was observed in the sample without antioxidant, and the oil containing the extract exhibited a lower PV due to the presence of phenolic compounds (Dauber et al., 2022). Hosseinialhashemi et al. (2020) stated the higher efficiency of encapsulated Pistacia khinjuk extract compared with TBHQ in extending sunflower oil stability (Hosseinialhashemi et al., 2020).
**FIGURE 3:** *Oxidation of sunflower oil during storage. (a) Peroxide value, and (b) thiobarbituric acid value*
The TBA value measures the development of oil oxidation in terms of secondary oxidation products. The TBA results of the different samples in Figure 3b show an increasing trend for all samples. The control sample exhibited the highest TBA value, followed by FREE, TBHQ, SGUM, WHEY, and COMP. Aleena et al. (2020) measured the oxidative stability of sunflower oil during high-temperature cooking; their results showed an increase in lipid oxidation during heating (Aleena et al., 2020), which is in accordance with the results of the present study. Binsi et al. (2017) increased the oxidative stability of fish oil using sage extract and oil encapsulation; their results revealed that sage could extend the shelf life of fish oil according to the PV and TBA (Binsi et al., 2017). An increasing trend in the TBA value of plant oils was also reported for soybean oil containing free and nanoencapsulated olive leaf extract (Taghvaei et al., 2014) and for potato skin extract in soybean oil (Tavakoli et al., 2021).
The oxidative stability index (OSI) is defined as the point of maximum variation of the oil oxidation rate. The results of the oxidative stability of the sunflower oil samples, which was measured at 110°C, are illustrated in Figure 4. The OSI of all samples decreased over time; a continued decrease in OSI with increasing storage time was also reported for sunflower oil with pussy willow extract (Sayyari & Farahmandfar, 2017). The control sample showed the lowest OSI, while oil samples containing encapsulated sage extract exhibited a higher OSI, which is related to the antioxidant activity of the sage extract. In a study conducted by Upadhyay and Mishra (2015), sage extract was found to have a protective effect on the oxidative stability of sunflower oil (Upadhyay & Mishra, 2015). Taghvaei et al. (2014) found that the thermal stability of soybean oil containing olive leaf extract in both free and encapsulated forms is higher than that of blank oil, with the oil containing the encapsulated extract showing a higher OSI (Taghvaei et al., 2014), which is in accordance with the results of the present study.
**FIGURE 4:** *Oxidative stability of sunflower oil during storage*
Color is considered a vital indicator of the quality of edible oils and of consumer preference. The color index results of the different oil samples are presented in Figure 5. The color index of all samples increased over time, and at the end of the storage period the control sample showed the highest color index. During storage under thermal conditions, the yellow color of sunflower oil darkens, which is related to the oil oxidation process. Accordingly, the oil samples containing TBHQ and encapsulated extract showed a lower color index. The decomposition of secondary metabolites into smaller compounds and the formation of polymeric triglycerides were much greater in the control sample. The colored compounds present in the sage extract caused a higher color index of the oil than in the control sample on days 0 and 4 of storage; the encapsulation process placed the extract's compounds inside the coating and decreased the color index. These results are in accordance with those of Kenari et al. (2020), who reported the highest color index in soybean oil without antioxidant, followed by oil containing TBHQ, nanoencapsulated Iranian golpar extract, and free extract (Kenari et al., 2020). Similar results were reported by Salami et al. (2020) for a lower color index of canola oil containing TBHQ than of oil with pumpkin peel extract (Salami et al., 2020).
**FIGURE 5:** *Color index of sunflower oil during storage*
Another indicator for evaluating oil oxidation is conjugated dienes (CD). These compounds are formed by the rearrangement of the double bonds of hydroperoxides during oil oxidation. The CD results of the different oil samples are shown in Figure 6. A continuous increase in the CD value was observed with the lengthening of the storage period for all samples. As with PV, the oil samples containing nanoencapsulated sage extract showed lower CD values. The CD value represents the primary degradation products of the oil and corroborates the PV of the oil samples. Talón et al. (2019) reported a lower CD value in sunflower oil containing encapsulated eugenol (Talón et al., 2019). An increasing trend in the CD value of plant oils during thermal processing and storage, as well as low CD values of oils containing plant extracts, has been reported previously (Kenari et al., 2020; Maghsoudlou et al., 2017; Salami et al., 2020; Talón et al., 2019).
**FIGURE 6:** *Conjugated dienes of sunflower oil during storage*
## CONCLUSION
In this study, the antioxidant effect of free and nanoencapsulated sage extract was compared to that of the synthetic antioxidant TBHQ. All coating materials could extend the antioxidant properties of the sage extract by controlling the gradual release of phenolic compounds and protecting them from environmental stresses. According to the results of the oil oxidation parameters, the use of the complex coating of whey protein isolate and qodumeh shahri seed gum is suggested for encapsulating sage extract as a natural antioxidant to extend the shelf life and oxidative stability of sunflower oil.
## CONFLICT OF INTEREST
The authors declare no conflict of interest.
## ETHICS STATEMENT
The study does not involve any human or animal testing.
## DATA AVAILABILITY STATEMENT
Research data are not shared. |
# Seasonal Changes in Midlife Women's Percentage Body Fat: A 1-Year Cohort Study
## Abstract
### Objective
The purpose of this longitudinal, observational study was to examine whether age and seasonal changes in sedentary activity (sedAct), moderate-to-vigorous physical activity (MVPA), and energy intake (EI) predict changes in body composition among midlife women. We hypothesized that reductions in MVPA and increases in sedAct and EI in winter, along with greater baseline age would predict increases in percentage body fat (%BF) across seasons.
### Design
This study used a longitudinal, within-subjects design.
### Setting
This study took place in Grand Forks, North Dakota.
### Participants
Participants included 52 midlife women (aged 40-60 years) who were observed over the course of one year.
### Measurements
Percentage body fat measures were obtained via whole body Dual Energy X-ray absorptiometry. Participants were scanned once per season. We measured EI using the ASA24®. We used a GTX3 accelerometer to measure physical activity. Each season, participants wore the monitors for 7 days, 12 hours per day. All measures began in summer.
### Results
Results of hierarchical multiple regression (MR) analyses showed that age increases (β = 0.310, $$p \leq 0.021$$) and summer-to-fall increases in EI (β = 0.427, $$p \leq 0.002$$) predicted seasonal increases in %BF (R2 =.36, F[5, 42]= 4.66, $$p \leq 0.02$$). Changes in MVPA and sedAct were not significant predictors. Repeated measures ANCOVA revealed that summer ($M = 37.7263$, $95\%$ CI [35.8377, 39.6149]) to winter ($M = 38.1463$, $95\%$ CI [36.1983, 40.0942]) increases in %BF are not reversed by spring ($M = 37.8761$, $95\%$ CI [35.9365, 39.8157]).
### Conclusions
To minimize increases in %BF and maintain health, midlife women, particularly older women, should be encouraged to pay extra attention to their diet in the fall months.
## Introduction
Approximately $76\%$ of US women have overweight (body mass index (BMI) ≥ 25) or obesity (BMI ≥ 30; 1). The greatest prevalence of obesity is among women aged 40 years and older, with $43.3\%$ classified as obese [2]. Obesity is associated with greater risk for heart disease, diabetes, and some cancers [3-6]. Women with obesity have a greater risk of type 2 diabetes than men [7], and the risk of diabetes increases with age [8]. As such, understanding factors that contribute to greater rates of obesity in midlife women is crucial.
Age is positively correlated with weight gain in women [9, 10], even when exercise remains constant [10]. Weight gain is especially prevalent in women who are midlife and of menopausal age [11]; however, changes in body composition (e.g., increases in body fat [12] and visceral fat [13]) can occur even in the absence of weight gain [12]. While this may be partly due to biological changes with aging [13], it is important to examine additional predictors that could be exacerbating midlife weight gain.
Weight gain may be more likely during certain seasons due to alterations in usual eating and physical activity. Indeed, diet, physical activity, and body weight change with season [14-16]. Over the course of a year, American adults consumed more energy (kcal) per day in fall relative to spring [14]. Physical activity (PA) differs across seasons as well; PA is lowest [14, 15, 17-19] and sedentary activity (sedAct) is greatest [17, 20] during winter relative to the other seasons. Weather is likely to drive physical activity changes, with conditions such as snowfall [21] and extreme weather [19] being frequently cited as barriers to engaging in physical activity. Another contributing factor to physical activity changes may be daylight; women who experience more than 14 hours of daylight engage in more moderate-to-vigorous physical activity (MVPA) than women who live in an area receiving less than 10 hours of daylight [22]. As a result, body weight is greatest in winter [14]. A study of Mexican American women reported similar findings; these women gained the most weight in fall [15]. Fall-winter weight gain presents a risk for gradual increases in body weight during adulthood, as weight gained is not lost during spring and summer [23].
The purpose of the present secondary analysis of a longitudinal, observational study was to examine whether age and changes in sedAct, MVPA, and energy intake (EI) across seasons predict changes in body composition among midlife women. We hypothesized that reductions in MVPA and increases in sedAct and EI in winter, along with greater baseline age, would predict increases in %BF from summer to spring. As secondary aims, we investigated the changes in each of the predictor variables across seasons. As these data were derived in a location where the spring months can be as intemperate as the winter months, we hypothesized that there would be greater levels of EI and sedAct in winter and spring relative to summer and fall, and that MVPA would be lower in winter and spring relative to summer and fall.
## Participants
The study was completed by a total of 52 ambulatory women ranging in age from 40-60 years as previously reported [16, 24-26]. Women were non-overweight, overweight, or obese as classified by BMI ranging from 18-35 kg/m2. Most women were menopausal at the beginning of the study ($$n = 27$$), as measured by follicle-stimulating hormone (FSH) levels of 25.8 mIU/ mL or greater, with 5 additional participants reaching menopausal status by winter. FSH measurement methods have been previously described [25, 26].
Participants were required to have stable weight, defined as fluctuation not exceeding ±4.5 kg for at least 6 months prior to the beginning of the study. Women were excluded from the study if they were smokers, pregnant or lactating, or had health conditions that would limit their physical activity. Furthermore, women who took medications that could potentially influence weight/ appetite were excluded. Participants were asked to refrain from engaging in intentional changes in diet or physical activity while the study was in progress.
Participants were recruited from the Grand Forks, North Dakota, area through advertising throughout the community. This study was reviewed and approved by the University of North Dakota Institutional Review Board and registered with ClinicalTrials.gov (#NCT01674296). Informed consent was documented prior to the beginning of the study.
## Body Composition
Percentage body fat (%BF) was estimated with whole body Dual Energy X-ray Absorptiometry (DXA, GE Lunar, Madison, WI, enCORE Software Version 13.60.033). The instrument was calibrated before each session using the manufacturer’s calibration phantom. For the 248 calibrations over the 21 months of the study, mean calibration %BF was $60.53\%$ with a standard deviation of 0.01 and a coefficient of variation of $0.016\%$ body fat. All calibration results were within the tolerance limits recommended by the manufacturer. Participants were scanned wearing light clothing or scrubs. Analysis was conducted using iDXA proprietary software.
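As a small illustration (not the study's software), the reported coefficient of variation follows from the phantom readings as the standard deviation divided by the mean; the readings below are placeholders, not the 248 recorded calibrations.

```python
# Minimal sketch: coefficient of variation (CV) of calibration-phantom %BF
# readings. The four readings are illustrative placeholders.
import numpy as np

cal = np.array([60.52, 60.53, 60.54, 60.53])  # phantom %BF readings
cv = cal.std(ddof=1) / cal.mean() * 100
print(f"mean = {cal.mean():.2f} %BF, CV = {cv:.3f}%")
```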
## Energy Intake
EI, defined as mean calories reported consumed across each season, was derived using the National Cancer Institute’s Automated Self-Administered 24-hour Dietary Recall [27]. The ASA24® is an online measure in which participants self-report the food that they have consumed over the last 24 hours, including all meals, snacks, and drinks. From these data, outcomes such as total intake of energy, carbohydrates, fat, and protein are calculated. Participants completed the ASA24® 36 times throughout the study, each completion spaced approximately 10 days apart.
## Physical Activity
We measured physical activity using GTX3 accelerometers (ActiGraph Corp., Pensacola, FL, USA). Each season, participants wore the monitors at the hip for 7 consecutive days, 12 hours per day. Data were cleaned to remove non-wear periods (i.e., periods during which consecutive zeros were recorded for 20 min). Epochs of 15 s were used for data collection. From these data, we calculated total minutes of sedAct and MVPA for each season using the algorithm of Crouter and Kuffel [28] and the Freedson cut-points [29].
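A minimal Python sketch of this cleaning step is shown below; it assumes counts in 15-s epochs, uses commonly cited cut-points rescaled from counts/min (these specific thresholds are our assumption, not taken from the paper), and operates on fake data.

```python
# Minimal sketch: flag non-wear as runs of consecutive zero counts lasting
# >= 20 min (80 x 15-s epochs), drop them, then sum sedentary and MVPA
# minutes. Cut-points (sedentary < 100 counts/min; Freedson moderate
# >= 1952 counts/min) are assumed values, rescaled to 15-s epochs.
import pandas as pd

counts = pd.Series([0] * 100 + [150, 900, 2100, 0, 520] * 20)  # fake epochs

is_zero = counts.eq(0)
run_id = (is_zero != is_zero.shift()).cumsum()       # label runs of equal values
run_len = is_zero.groupby(run_id).transform("size")  # length of each run
non_wear = is_zero & (run_len >= 80)                 # 80 epochs x 15 s = 20 min

worn = counts[~non_wear]
sed_min = (worn < 100 / 4).sum() * 15 / 60
mvpa_min = (worn >= 1952 / 4).sum() * 15 / 60
print(f"sedentary: {sed_min:.1f} min, MVPA: {mvpa_min:.1f} min")
```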
## Procedures
The present study had two cohorts; the first began in July of 2012, the second began in July of 2013. Participants visited the Research Center weekly. For the purposes of this study, we defined seasons as summer (June, July, August), fall (September, October, November), winter (December, January, February), and spring (March, April, May). All visits were conducted in the middle month of each season (i.e., July, October, January, and April).
While the visits to the Research Center were otherwise identical, the participants' very first visit, Day 0 of the summer visit, had two unique components: (1) participants signed an informed consent document, and (2) participants were trained on how to wear the accelerometers.
On Day 0 of each season, participants came to the Research Center after a 12 h fast to complete the DXA body scan, along with other tests including completing a series of online questionnaires. The additional Day 0 tests are described in previous publications [24, 25]. Before leaving the Research Center, participants were given their physical activity monitors and instructed to wear them 12 hours per day for the following 7 days. On Day 8, participants returned the physical activity monitors to the Research Center.
## Statistical Analysis
To determine whether there were changes in %BF across seasons, a multivariate repeated measures analysis of covariance (ANCOVA) was conducted with season as the repeated measure and age as the continuous covariate. Due to a violation of the sphericity assumption of compound symmetry, we used Greenhouse-Geisser corrected p values in the ANCOVA tables. For pairwise multiple comparisons of least squares means of age by season, we used Tukey-Kramer adjusted p values. We calculated difference scores for %BF, EI, sedAct, and MVPA between summer and fall, winter, and spring. We then mean-centered these scores for regression analyses. A significance level of ∝ = 0.05 was chosen a priori to determine significant p values. We used SAS 9.4 TS1M7 for these analyses.
We used 3 hierarchical multiple regression/correlation (MRC) models to predict summer-to-spring Δ%BF. In each model, age was placed in Step 1 as the variable to be controlled, because it was a salient predictor of Δ%BF, and all other predictors were placed in Step 2. Model 1 used summer-fall difference scores for sedAct, MVPA, and EI; Model 2 used summer-winter difference scores; and Model 3 used summer-spring difference scores. As secondary analyses, we used repeated measures ANOVA to investigate whether there were seasonal changes in sedAct and MVPA; one person was excluded from this analysis due to incomplete data. We used bivariate correlational analyses to further assess the relationships between the predictors and Δ%BF across seasons, as well as between FSH and %BF/Δ%BF. We used SPSS Version 27.0 for these analyses.
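For illustration, the two-step logic of these hierarchical models can be sketched in Python as below; the data frame is simulated under assumed effect sizes and is not the study data, and compare_f_test supplies the R2-change F test.

```python
# Minimal sketch of a two-step hierarchical regression: age alone in Step 1,
# then seasonal difference scores added in Step 2, with the R^2 change tested
# via compare_f_test. Data are simulated placeholders, not study records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 52
df = pd.DataFrame({
    "age": rng.uniform(40, 60, n),
    "d_ei": rng.normal(0, 150, n),    # summer-to-fall change in EI (kcal)
    "d_sed": rng.normal(0, 60, n),    # change in sedentary minutes
    "d_mvpa": rng.normal(0, 10, n),   # change in MVPA minutes
})
df["d_bf"] = 0.1 * (df.age - 50) + 0.002 * df.d_ei + rng.normal(0, 0.8, n)

step1 = smf.ols("d_bf ~ age", df).fit()
step2 = smf.ols("d_bf ~ age + d_sed + d_mvpa + d_ei", df).fit()
f_stat, p_val, df_diff = step2.compare_f_test(step1)
print(f"R2: {step1.rsquared:.2f} -> {step2.rsquared:.2f}; "
      f"F-change({df_diff:.0f}, {step2.df_resid:.0f}) = {f_stat:.2f}, p = {p_val:.3f}")
```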
## Primary Outcomes
There was a significant season by age interaction for %BF, F(1.93, 94.53) = 4.09, $$p \leq .021.$$ No pairwise comparisons were statistically significant. Most ($60\%$) participants experienced increases in %BF from summer ($M = 37.7263$, $95\%$ CI [35.8377, 39.6149]) to winter ($M = 38.1463$, $95\%$ CI [36.1983, 40.0942]). Summer to winter increases in %BF (ΔM = 0.4200, adjusted $95\%$ CI [-0.1669, 1.0069]) were not reversed by spring (ΔM = -0.2702, adjusted $95\%$ CI [-0.7754, 0.2350]). Among women younger than the mean age of 49 years who gained %BF from summer to winter, $30\%$ (3 of 10) did not reverse the increase by the following spring, whereas among women older than 49 years, $37\%$ (7 of 19) did not reverse the increase. Age in the between-subjects table yielded a test statistic of F[1, 49] = 3.77, $$p \leq .058.$$ See Table 1 for means and standard deviations and Table 2 for frequencies of season-to-season changes for the primary predictors.
Hierarchical MRC model 1 predicted Δ%BF, R2 =.25, F[4, 47] = 3.89, $$p \leq .008.$$ Age was a significant predictor in Step 1, R2 =.12, F[1, 50] = 6.90, $$p \leq .011.$$ Adding seasonal changes in sedAct, MVPA, and EI from summer to fall [ΔR2 =.13, Fchange[3, 47] = 2.66, $$p \leq .059$$] did not increase the model’s ability to predict Δ%BF. In the context of the full model, increases in age and increases in EI had significant unique contributions in predicting increased %BF in spring. See Table 3 for coefficients.
Model 2 also predicted Δ%BF, R2 =.26, F[4, 46] = 3.93, $$p \leq .008.$$ Age was a significant predictor in Step 1, R2 =.14, F[1, 49] = 7.99, $$p \leq .007.$$ However, adding seasonal changes in sedAct, MVPA, and EI from summer to winter did not improve the model, ΔR2 =.12, Fchange[3, 46] = 2.36, $$p \leq .084.$$ In the context of the full model, age was the only significant predictor of Δ%BF. See Table 3 for coefficients.
**Table 3**
| Unnamed: 0 | Step | Variables | B | β | t | p | pr2 | sr2 | 95% CI for B |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Summer to Fall Changes |  |  |  |  |  |  |  |  |  |
| 1 | | Age | .128 | .348 | 2.63 | .011* | .348 | .348 | .030, .226 |
| 2 | | Age | .101 | .274 | 2.11 | .040* | .294 | .267 | .005, .197 |
| | | sedAct | .002 | .067 | 0.46 | .650 | .066 | .058 | -.008, .012 |
| | | MVPA | -.014 | -.111 | 0.76 | .454 | -.110 | -.096 | -.051, .023 |
| | | EI | .002 | .335 | 2.62 | .012* | .357 | .331 | .001, .004 |
| Summer to Winter Changes |  |  |  |  |  |  |  |  |  |
| 1 | | Age | .134 | .374 | 2.83 | .007** | .374 | .374 | .039, .229 |
| 2 | | Age | .119 | .332 | 2.43 | .019* | .337 | .309 | .020, .218 |
| | | sedAct | .008 | .269 | 1.86 | .069 | .265 | .237 | -.001, .017 |
| | | MVPA | .009 | .058 | 0.38 | .705 | .056 | .048 | -.037, .054 |
| | | EI | .002 | .231 | 1.78 | .082 | .254 | .257 | .000, .004 |
| Summer to Spring Changes |  |  |  |  |  |  |  |  |  |
| 1 | | Age | .128 | .348 | 2.62 | .011* | .348 | .348 | .030, .226 |
| 2 | | Age | .121 | .328 | 2.45 | .018* | .337 | .325 | .022, .219 |
| | | sedAct | .000 | .006 | 0.37 | .970 | .005 | .005 | -.009, .010 |
| | | MVPA | -.014 | -.107 | 0.73 | .469 | -.106 | -.097 | -.052, .024 |
| | | EI | .001 | .194 | 1.37 | .176 | .196 | .182 | -.001, .003 |
Model 3 did not yield a better fit than the reduced model (age as only predictor) for Δ%BF, R2 =.18, F[4, 47] = 2.51, $$p \leq .054.$$ Though age was a significant predictor in Step 1, R2 =.12, F[1, 50] = 6.90, $$p \leq .011$$, adding seasonal changes in sedAct, MVPA, and EI from summer to spring did not significantly improve the model, ΔR2 =.10, Fchange[3, 47] = 1.05, $$p \leq .382.$$ See Table 3 for coefficients.
## Secondary Outcomes
Though they were not found to predict Δ%BF, we investigated whether there were seasonal changes in sedAct and MVPA. We found that sedAct changed across seasons [F(2.55, 127.62) = 5.48, $$p \leq .003$$], increasing during winter relative to summer, F[1, 50] = 9.56, $$p \leq .003.$$ Likewise, we found changes in MVPA across seasons [F[3, 150] = 4.74, $$p \leq .004$$]; MVPA in winter was lower than summer MVPA, F[1, 50] = 15.52, $p \leq .001.$
Results of bivariate correlational analyses revealed that summer-fall increases in EI were associated with sustained increases in %BF into both winter [r[50] =.37, $$p \leq .009$$] and spring, r[50] =.37, $$p \leq .007.$$ SedAct in summer, fall, and winter was positively correlated with %BF in spring (ps <.01). There was no relationship between %BF and FSH levels seasonally, nor between the change in %BF and the change in FSH levels across seasons.
## Discussion
This one-year, longitudinal study examined predictors of changes in %BF of midlife women from summer to the following spring. While we hypothesized that age and changes in EI, sedAct, and MVPA would all predict %BF changes, the most important predictors of increases in %BF from summer to spring were greater age and increases in EI from summer to fall. The magnitude of yearly increases in %BF was greater in the older women, consistent with past research [9, 10]. Likewise, we found that greater EI in fall relative to summer was associated with greater increases in %BF, again consistent with past research [14, 15, 17-19]. These findings suggest that while %BF increases with age, the increase is exacerbated by greater EI, especially during the fall. Notably, for the group as a whole there were no increases in EI from summer to fall, yet the women who reported the greatest increases in EI from summer to fall had the greatest increases in %BF during this period. Humans may experience circannual weight-gain patterns similar to those of hibernating mammals, storing fat in the fall in preparation for food shortage in winter [30, 31]. As such, excess EI in fall may be more likely to be stored as excess body fat; to our knowledge, there has not yet been work done to test this hypothesis. Though the mechanism behind the relationship between EI and %BF remains unclear, these data suggest that limiting seasonal increases in EI during the fall, especially in older women, is important because not all of the weight gained may be lost the following spring, leading to gradual weight gain with age [23].
Though they were not significant predictors of body composition changes, secondary analyses showed that sedAct and MVPA differed across seasons, with sedAct being greatest and MVPA being lowest in winter relative to summer. This is consistent with past research showing that people have the lowest PA and greatest sedentary behavior during winter months [14, 15, 17-20]. Given the health concerns of sedentary activity (e.g., increased risk of cardiovascular disease, cancer, and diabetes; 32) and the health benefits of physical activity independent of weight management (e.g., greater cardiorespiratory fitness is associated with decreased risk of mortality regardless of BMI; 33), health benefits would likely accrue from limiting increases in sedAct and increasing MVPA during winter months. Contrary to our hypothesis, seasonal changes in MVPA did not predict changes in %BF. Depending on the season, the women in our study engaged in 37 to 44 min/week of MVPA, far below the recommended amount of 150 min/week [34]. The small magnitude of absolute MVPA limited the ability of energy expenditure to affect the %BF of the women in our study, and the narrow range of seasonal change in MVPA reduced its ability to predict change in %BF.
Strengths of the current study include the use of accelerometers to measure PA, the use of DXA to assess the primary outcome (%BF), and the collection of three dietary recalls per month to estimate usual EI during each season. Retention was high, with only 2 women dropping out, and compliance was high for completion of the PA and EI study tasks.
This study has limitations as well. The study lasted 9 months, and while it represents all four seasons over the course of one year, it is unknown whether participants would have modified EI or PA or reduced %BF if followed through a second summer. The sample is small and specialized regarding age and gender, which limits generalizability but does provide evidence for a group at risk of gaining excess adipose tissue [12, 13]. The study is also limited in that the sample was from northern North Dakota, USA, an area with great changes in both weather and sunlight throughout the year; the findings are therefore not necessarily applicable to areas with long, hot summers and cool winters, or to places that are temperate year-round.
## Conclusions
Overall, the results of this study suggest that as women age, attention should be given to achieving or maintaining appropriate energy intake and exercise during the fall and winter months to reduce increases in %BF. Limiting increases in sedentary behavior and energy intake during the fall and winter may help women reduce seasonal increases in %BF.
## Ethical Standards
This study complies with current laws of the country in which it was performed. This study received IRB approval.
## Competing Interests
The authors (AMN, SLC, LJ, DGP, & JNR) have no competing interests to report. This work was supported by the U.S. Department of Agriculture, Agricultural Research Services #5450-51530-057-00D. The U.S. Department of Agriculture prohibits discrimination in all its programs and activities on the basis of race, color, national origin, age, disability, and where applicable, sex, marital status, familial status, parental status, religion, sexual orientation, genetic information, political beliefs, reprisal, or because all or part of an individual's income is derived from any public assistance program. (Not all prohibited bases apply to all programs.) Persons with disabilities who require alternative means for communication of program information (Braille, large print, audiotape, etc.) should contact USDA's TARGET Center at (202) 720-2600 (voice and TDD). To file a complaint of discrimination, write to USDA, Director, Office of Civil Rights, 1400 Independence Avenue, S.W., Washington, D.C. 20250-9410, or call (800) 795-3272 (voice) or (202) 720-6382 (TDD). USDA is an equal opportunity provider and employer.
# GC–MS analysis and pharmacological evaluations of Phoenix sylvestris (Roxb.) seeds provide new insights into the management of oxidative stress and hyperglycemia
## Abstract
Phoenix sylvestris Roxb. (Arecaceae) seeds are used in the treatment of diabetes in the traditional system of medicine. The present study evaluated the antihyperglycemic and antioxidant activities, as well as the total phenolic and flavonoid content, of the methanol extract of P. sylvestris seeds (MEPS). The constituents of the extract were identified by gas chromatography–mass spectrometry (GC–MS) analysis. MEPS demonstrated strong antioxidant activity against 2,2-diphenyl-1-picrylhydrazyl (DPPH) (IC50 = 162.70 ± 14.99 μg) and nitric oxide (NO) (IC50 = 101.56 ± 9.46 μg/ml) free radicals. It also possesses a substantial amount of phenolics and flavonoids. It significantly ($p \leq .05$) reduced blood glucose levels in glucose-loaded and alloxan-induced diabetic mice at doses of 150 and 300 mg/kg b.w., respectively. A total of 46 compounds were detected and identified by GC–MS analysis, among which 8-methylisoquinoline N-oxide ($32.82\%$) was predominant. The phytochemical study by GC–MS revealed that MEPS possesses compounds that could be related to its antidiabetic and antioxidant activities. To recapitulate, P. sylvestris seeds can be a very good option for antidiabetic and antioxidant activity, though further studies are still recommended to identify the responsible phytochemicals and establish their exact mechanism of action.
## INTRODUCTION
Diabetes mellitus is a metabolic syndrome manifested by chronic hyperglycemia along with impaired metabolism of carbohydrates, protein, and fats due to diminished insulin secretion and/or action (Alam et al., 2022; Nayak & Roberts, 2006). Chronic hyperglycemia weakens antioxidant defenses by increasing oxidative stress and reactive oxygen species (ROS) in the islets of the pancreas (Savu et al., 2012). Furthermore, it has been reported that diabetes is responsible for the excess generation of free radicals due to the reduction of antioxidant levels in the body (Ali & Agha, 2009). Multiple antihyperglycemic agents along with insulin are currently available in the market, but they are not devoid of significant undesirable side effects (Pari & Saravanan, 2004). Recently, the use of plants and plant materials has attracted the attention of researchers for the development of new antihyperglycemic agents due to their promising efficacy and limited toxicity (Rates, 2001). In addition, antioxidants derived from plants have been shown to play important roles in improving diabetes‐associated disorders (Rahimi et al., 2005).
Phoenix sylvestris (L.) Roxb., a plant of the palm family Arecaceae, is commonly known as “Khejur” in Bangladesh. The plant seeds have been reported to be bacteriostatic against Gram‐positive and Gram‐negative organisms (Kothari, 2011). They are used in the treatment of dysentery, ague, and diabetes in the traditional medicine system (Beg & Singh, 2015; Ghani, 1998). Although traditional use advocates P. sylvestris as a candidate for treating diabetes, no scientific report exists to corroborate this claim. Therefore, the present study aimed to determine the antioxidant action, total phenolic and flavonoid contents, and antihyperglycemic activity of seeds of P. sylvestris for the first time. The constituents of the seed extract have also been identified by gas chromatography–mass spectrometric (GC–MS) analysis so that future researchers have a useful lead for identifying the responsible phytochemicals from the plant seeds and for discovering and developing novel therapeutics against diabetes and oxidative stress.
## Plant materials and extraction
The fully matured fruits of P. sylvestris were collected from Akabpur, Mainamati, Comilla, Bangladesh in July 2013. The fruits were identified by the authorities of Bangladesh National Herbarium, Mirpur, Dhaka, Bangladesh, and a voucher specimen has been deposited (accession no: DACB: 38499) for future reference. The seeds of P. sylvestris were separated from the fruits, dried, and ground to a coarse powder using a mechanical grinder. About 500 g of powdered seeds was mixed with 1200 ml of methanol (MeOH). The mixture was occasionally stirred and kept at 25 ± 2°C for 72 h. The extract was then filtered through Whatman No. 41 filter paper. The solvent was removed using a rotary evaporator under reduced pressure at 40°C and 50 rpm. Finally, 12.4 g ($2.48\%$ yield) of concentrated extract was obtained, which was used for the phytochemical and biological studies.
## Chemicals and drugs
Chemicals and reagents used in this study were MeOH, 1,1‐diphenyl‐2‐picrylhydrazyl (DPPH), Griess reagent, quercetin, gallic acid, ascorbic acid, pentobarbital sodium (Sigma Co.), sodium carbonate (Na2CO3), Na‐K tartrate, aluminum chloride (AlCl3), Folin–Ciocalteu's reagent (Merck Co.), and alloxan monohydrate (Loba Chemie Pvt. Ltd.). Metformin hydrochloride was obtained as a gift sample from Square Pharmaceuticals Ltd.
## Ethical statements
The protocols for the current study were endorsed by the Ethics Committee of Stamford University Bangladesh (SUB/IAEC/13.05). The animals were treated according to the guidelines provided by the Swiss Academy of Medical Sciences and the Swiss Academy of Sciences. After the experiments, animals were euthanized using pentobarbital sodium following the AVMA Guidelines for the Euthanasia of Animals: 2013 Edition. Necessary steps were taken to minimize animal suffering.
## Preliminary screening
MEPS was qualitatively screened for the detection of carbohydrates, reducing sugars, steroids, alkaloids, proteins, saponins, tannins, and flavonoids following the standard procedures (Ghani, 1998).
## GC–MS (gas chromatography–mass spectrometry) analysis
GC–MS analysis of the MeOH extract of P. sylvestris seeds was performed using an Agilent 7890A (Agilent Technologies) capillary gas chromatograph interfaced to a 5975C inert XL EI/CI triple‐axis mass detector. The gas chromatograph was equipped with an HP‐5MSI fused capillary column of $5\%$ phenyl, $95\%$ dimethyl‐poly‐siloxane (film: 0.25 μm, length: 90 m, and diameter: 0.250 mm). The GC parameters were programmed as follows: inlet temperature: 250°C; oven temperature: 90°C initially, ramped to 200°C at 3°C/min (held for 2 min), then to 280°C at 15°C/min (held for 2 min); carrier gas (helium) flow rate: 1.1 ml/min; auxiliary temperature: 280°C. The total run time for the chromatographic analysis was 46 min. The MS parameters were set as follows: quad temperature: 150°C; source temperature: 230°C; mode: scan mode; mass range: 50–550 m/z. The NIST‐MS Library was used for mass spectra analysis and identification of compounds. The relative percentage of separated compounds was determined from the peak areas of the total ionic chromatogram.
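As a consistency check, the stated 46-min total run time follows directly from this oven program, assuming linear ramps and counting the two holds:

$$t = \frac{200-90}{3} + 2 + \frac{280-200}{15} + 2 \approx 36.7 + 2 + 5.3 + 2 = 46\ \text{min}$$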
## Determination of total phenolic content (TPC)
The total phenolics present in the MEPS were quantified using Folin–Ciocalteu's reagent (Singleton et al., 1999). An aliquot (0.5 ml) of Folin–Ciocalteu's reagent was mixed with 1 ml of MEPS (200 μg/ml). After 5 min, 4 ml of $7.5\%$ (w/v) Na2CO3 prepared in distilled water was added to the mixture. The solution was mixed well and incubated at 20°C for 1 h. The absorbance was measured at 765 nm using a DR 5000™ (Hach) spectrophotometer. A calibration curve ($y = 0.0086x + 0.2546$, $R^2 = 0.9998$) of gallic acid was prepared using solutions of varying concentrations ranging from 25 to 400 mg/L. Then, the amount of total phenolics present in the extract was measured in gallic acid equivalents (GAE) using the formula A = (C × V)/m, where A is the total amount of phenolics equivalent to gallic acid present in the extract, C is the concentration of gallic acid (mg/ml) measured from the calibration curve, V is the extract volume (ml), and m denotes the extract weight (g). The process was conducted in triplicate, and the mean value of TPC was determined.
## Determination of total flavonoid content (TFC)
A solution (1 ml) of extract (200 μg/ml) was taken in a test tube, and 2 ml of MeOH was added to it. The solution was then mixed well with 0.1 ml of $10\%$ aluminum chloride (w/v, prepared in distilled water), followed by 1 M Na–K tartrate and 2.8 ml of distilled water, and incubated at 25°C. After 30 min, the absorbance of the mixture was measured at 415 nm (Selim et al., 2014). The calibration curve of quercetin ($y = 0.0178x + 0.6152$, $R^2 = 0.9975$) was prepared by measuring the absorbance of its different concentrations (25–400 mg/L). Then, the total flavonoid content of the extract was calculated using the standard calibration curve and expressed as mg of quercetin-equivalent flavonoids per g of extract. The experiment was conducted three times, and the mean value of flavonoid content was calculated.
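Both assays reduce to the same back-calculation: invert the calibration line to obtain the standard-equivalent concentration, then scale by extract volume and weight via A = (C × V)/m. The sketch below illustrates this in Python; the absorbance readings, and the assumption that 1 ml of the 200 μg/ml working solution (i.e., 0.0002 g of extract) enters the formula, are hypothetical illustrations rather than the study's raw data.

```python
# Back-calculation of total phenolic/flavonoid content from a calibration
# curve. Absorbance inputs are hypothetical placeholders.

def content_from_calibration(absorbance, slope, intercept, volume_ml, mass_g):
    """Invert y = slope*x + intercept to get the standard-equivalent
    concentration in mg/L, then apply A = (C x V) / m with C in mg/ml."""
    conc_mg_per_l = (absorbance - intercept) / slope
    conc_mg_per_ml = conc_mg_per_l / 1000.0
    return conc_mg_per_ml * volume_ml / mass_g  # mg equivalents per g extract

# Hypothetical example: 1 ml of a 200 ug/ml extract solution (0.0002 g extract).
tpc = content_from_calibration(0.412, slope=0.0086, intercept=0.2546,
                               volume_ml=1.0, mass_g=0.0002)
tfc = content_from_calibration(0.700, slope=0.0178, intercept=0.6152,
                               volume_ml=1.0, mass_g=0.0002)
print(f"TPC ~ {tpc:.1f} mg GAE/g extract; TFC ~ {tfc:.1f} mg QE/g extract")
```

With these placeholder absorbances the function returns roughly 92 mg GAE/g and 24 mg QE/g, on the same order as the values reported in the results below.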
## DPPH free radical scavenging capacity assay
The effect of MEPS on free radicals was determined by analyzing its scavenging effect on stable 1,1‐diphenyl‐2‐picrylhydrazyl (DPPH) free radicals. The plant extract or standard drug (ascorbic acid) was prepared at concentrations ranging from 400 to 1.5625 μg/ml in MeOH. A 0.1 mM solution of DPPH in MeOH was prepared, and 2 ml of this solution was added to 2 ml of the test solution. The mixture was mixed properly and incubated for 30 min at room temperature in a dark place. The absorbances of the standard and experimental solutions were measured against a blank (without test sample or drug) DPPH solution using a spectrophotometer at 517 nm (Wang et al., 2013). The scavenging of DPPH free radicals was expressed as percentage inhibition, determined from the following equation:

$$\%\,\text{inhibition} = \frac{A_{\text{blank}} - A_{\text{sample}}}{A_{\text{blank}}} \times 100$$

The IC50 value was then calculated from the % inhibition vs. log concentration curve.
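A minimal Python sketch of this calculation is given below; the blank and sample absorbances are hypothetical placeholders chosen only to illustrate interpolating the IC50 on the % inhibition vs. log concentration curve.

```python
import numpy as np

# DPPH % inhibition and IC50, following the equation above.
# Absorbances are hypothetical placeholders, not the study's raw data.
conc = np.array([1.5625, 3.125, 6.25, 12.5, 25, 50, 100, 200, 400])  # ug/ml
a_blank = 0.950
a_sample = np.array([0.902, 0.864, 0.817, 0.760, 0.694,
                     0.618, 0.532, 0.447, 0.361])

inhibition = (a_blank - a_sample) / a_blank * 100.0

# IC50 = concentration giving 50% inhibition, interpolated on the
# % inhibition vs log10(concentration) curve (inhibition must be monotonic).
ic50 = 10 ** np.interp(50.0, inhibition, np.log10(conc))
print(f"IC50 ~ {ic50:.1f} ug/ml")
```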
## Nitric oxide (NO) scavenging capacity assay
Exactly 4 ml of MEPS or standard (ascorbic acid) solution at concentrations of 400–1.5625 μg/ml in methanol was taken in different test tubes. Then, 1.0 ml of sodium nitroprusside (5 mM) was added to the samples and incubated for 2 h at 30°C. After incubation, 2 ml of solution was taken, and 1.2 ml of Griess reagent ($1\%$ sulfanilamide, $0.1\%$ naphthylethylenediamine dihydrochloride in $2\%$ H3PO4) was added to it. The absorbances of the standard and test solutions were measured against a blank using a spectrophotometer at 550 nm (Alisi & Onyeze, 2008). The percentage of inhibition was calculated as described for the DPPH free radical scavenging assay, and the IC50 value was calculated.
## Study animals
Swiss albino mice of either sex, aged 6–8 weeks and weighing 25–30 g, were used for the antihyperglycemic study. They were procured from the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) and housed in appropriate cages with wood-flake bedding. The mice were allowed to acclimatize for 2 weeks under standard laboratory conditions and were maintained at 25°C ± 2°C, $55\%$–$60\%$ relative humidity, and a 12 h light/dark cycle. They had access to water and feed ad libitum; the feed was formulated by the authorities of ICDDR,B. The animals were randomly divided into five groups (normal control, diabetic control, and three experimental groups), each consisting of five mice ($$n = 5$$). The normal and diabetic control groups received oral treatment with vehicle (physiological saline). The positive control and experimental groups were orally treated (p.o.) with metformin and MEPS, respectively. The experimental mice were deprived of feed for 12 h before experiments but had free access to water. The tests were performed between 9.00 a.m. and 5.00 p.m., and the investigators were blinded to the experimental groups.
## Acute toxicity test
The acute toxic effect of MEPS on animals was assessed before studying the antihyperglycemic activity. Experimental animals were divided into four experimental groups and one control group ($$n = 5$$). The experimental animals were orally treated with MEPS at doses of 500, 1000, 2000, and 3000 mg/kg b.w., while control group animals received physiological saline only. Animals were housed and adequately provided with ICDDR,B-formulated food and water ad libitum. They were carefully observed for 72 h after administration of MEPS, and any adverse reactions (skin rashes, swelling, itching), behavioral changes, and mortality were documented (Walker et al., 2008).
## Oral glucose tolerance test (OGTT)
The mice of the control, diabetic control, standard drug treatment (positive control), and MEPS treatment (experimental) groups fasted overnight. Blood samples were collected from the tail vein of each animal, and the glucose level was measured using an Accu‐Chek® (Roche) one‐touch glucometer as the baseline (0 min). Then, animals of the diabetic control, positive control, and experimental groups received vehicle (10 ml/kg b.w.), metformin (60 mg/kg b.w.), and MEPS (50, 150, or 300 mg/kg b.w.), respectively. Initially, antihyperglycemic activity was evaluated with the lower dose (50 mg/kg b.w.) of MEPS, selected based on the effect of P. sylvestris fruits observed in a previous study (Shajib et al., 2015). The higher dose limit (300 mg/kg b.w.) was selected based on the significant glucose‐lowering effect of MEPS. After 30 min, each group of mice received $10\%$ glucose solution at the dose of 2 g/kg b.w. Then, the blood glucose level was measured at 30, 60, 90, and 120 min following glucose treatment (Chaturvedi et al., 2004).
## Assay for alloxan‐induced diabetes
The experimental mice were randomly divided into control, diabetic control, standard drug treatment (positive control), and MEPS treatment (experimental) groups. Positive control and experimental group animals were induced with diabetes by intraperitoneal (i.p.) injection of alloxan monohydrate at the dose of 60 mg/kg b.w. The blood glucose level was measured before alloxan treatment and monitored every day afterwards. Alloxan induces type 1 or insulin‐dependent diabetes (Macdonald Ighodaro et al., 2017). In fasting conditions, a blood glucose level of more than 7 mmol/L is indicative of diabetes (Adeyi et al., 2015; Mathew & Tadi, 2021; Njogu et al., 2016). Three days after alloxan administration, fasted mice with blood sugar levels ≥8 mmol/L were considered diabetic (Ezeja et al., 2015). Mice showing sustained hyperglycemia over the next 5 days were selected for the study. Alloxan may increase blood glucose levels to more than 11 mmol/L on consecutive days after administration (Macdonald Ighodaro et al., 2017; Njogu et al., 2016); however, the time required to reach this blood glucose level can vary with the alloxan administration route, dose, and experimental animal species (Hansen et al., 2007; Kim et al., 2006; Lips et al., 1988; Njogu et al., 2016). The hyperglycemic mice received vehicle (10 ml/kg b.w.), metformin (60 mg/kg b.w.), or MEPS (50, 150, and 300 mg/kg b.w.). Blood samples were collected from the tail vein of each group of mice, and the glucose level was measured at 0 h (as baseline) and at 4, 8, and 24 h following treatments (Semwal et al., 2010).
## Statistical analysis
All the experimental data were presented as mean ± SEM (standard error of the mean). IC50 values were determined by utilizing GraphPad Prism 6.01 (GraphPad Software, Inc.). The comparison of different groups against the control group was performed by one‐way analysis of variance (ANOVA) followed by Dunnett's test as the post hoc test using SPSS 22 (IBM) software. $p \leq .05$ was set as the level of statistical significance.
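As an illustration of this workflow, the sketch below runs a one-way ANOVA followed by Dunnett's test against the control group in Python rather than SPSS (scipy.stats.dunnett requires SciPy ≥ 1.11); the group values are simulated stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

# Simulated blood-glucose values (mmol/L) for one time point, n = 5 per group,
# mimicking the ANOVA + Dunnett's post hoc workflow described above.
rng = np.random.default_rng(0)
diabetic_control = rng.normal(15.0, 0.5, 5)
meps_150 = rng.normal(12.5, 0.5, 5)
meps_300 = rng.normal(11.5, 0.5, 5)

f_stat, p_anova = stats.f_oneway(diabetic_control, meps_150, meps_300)
dunnett = stats.dunnett(meps_150, meps_300, control=diabetic_control)
print(f"ANOVA p = {p_anova:.4g}; Dunnett p-values = {dunnett.pvalue}")
```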
## Phytochemical analysis
Preliminary screening for different phytochemical groups revealed that the plant seed contained alkaloids, steroids, carbohydrates, proteins, flavonoids, and tannins. The most abundant compound revealed by the GC–MS analysis of the extract was 8‐methylisoquinoline N‐oxide ($32.82\%$). Other major constituents were as follows: methyl oleate ($12.19\%$), methyl linoleate ($7.44\%$), dodecanoic acid, methyl ester ($5.59\%$), palmitic acid, methyl ester ($4.62\%$), 9‐octadecenoic acid (Z)‐,2,3‐dihydroxypropyl ester ($3.11\%$), 5,8‐dimethyl‐1,4‐dihydro‐1,4‐methanonaphthalene ($2.93\%$), tetradecanoic acid, methyl ester ($2.88\%$), alpha‐bisabolol ($2.41\%$), linalool ($1.69\%$), (+)‐(4S,8R)‐8‐epi‐beta‐bisabolol ($1.59\%$), 1‐fluoro‐4‐acetylbenzene ($1.56\%$), methyl stearate ($1.41\%$), 11‐eicosenoic acid, methyl ester ($1.39\%$), and alpha‐bisabolol oxide B ($1.15\%$). The identified compounds, peak areas (%), and retention times (min) of the MEPS constituents from the GC–MS analysis are presented in Table 1. The total ionic chromatogram of the methanol extract of P. sylvestris seed is shown in Figure 1.
## Antioxidant activity
Quantitative analysis of the crude extract demonstrated 91.32 ± 5.20 mg of total phenolics equivalent to gallic acid and 21.99 ± 4.70 mg of total flavonoids equivalent to quercetin per gram of extract. The anti‐radical activity of MEPS against DPPH and NO was found to have IC50 values of 162.70 ± 14.99 and 101.56 ± 9.46 μg/ml, respectively; the standard drug ascorbic acid demonstrated IC50 values of 8.71 ± 0.02 and 7.39 ± 0.43 μg/ml, respectively. The highest percent inhibition of DPPH radicals exhibited by MEPS was 61.67 ± $1.74\%$ at the maximum experimental concentration (400 μg/ml), whereas ascorbic acid inhibited DPPH radicals by 96.41 ± $0.00\%$ (Figure 2). MEPS and ascorbic acid displayed a maximum of 68.20 ± $1.00\%$ and 96.78 ± $0.38\%$ nitric oxide (NO) scavenging activity, respectively, at the highest concentrations tested (Figure 3). The results show that MEPS is capable of arresting the free radicals generated by DPPH and NO, which are harmful to human health (Hasan et al., 2009). It has been reported that plant phenolics and flavonoids may exert significant antioxidant activities (Rice‐Evans et al., 1997; Saija et al., 1995). The presence of a considerable amount of phenolics and flavonoids in MEPS can thus be linked to its strong antioxidant activity.
**FIGURE 2:** *DPPH (2,2‐diphenyl‐1‐picrylhydrazyl) free radicals scavenging activity of ascorbic acid (standard) and MEPS* **FIGURE 3:** *Nitric oxide (NO) free radicals scavenging activity of ascorbic acid (standard) and MEPS*
## Acute toxicity
Oral administration of MEPS up to 3000 mg/kg did not cause any adverse reactions, behavioral changes, or mortality during the observational period. This suggests that MEPS possesses a low toxicity profile (LD50 > 3000 mg/kg b.w.). The doses of the MEPS for antihyperglycemic studies were selected from trial experiments. The observations from the acute toxicity study indicate that the experimental doses of MEPS selected for the study were safe.
## Oral glucose tolerance
The oral glucose tolerance test (OGTT) measures the body's ability to utilize glucose and is commonly performed to evaluate pre‐diabetes, diabetes, and gestational diabetes (Hartling et al., 2012; Ziegler et al., 2009). An additional glucose load causes excess plasma glucose, characterized as hyperglycemia, an early clinical manifestation of diabetes. Fasted mice showed glucose levels below 5.5 mmol/L, which is within the normal range (Andrikopoulos et al., 2008). Thirty minutes after oral glucose treatment, the plasma glucose level was significantly increased in mice and then gradually declined throughout the observation period. Oral treatment with MEPS or the standard drug metformin caused a marked reduction of the elevated blood glucose level in the OGTT (Figure 4). The effect was significant over the observation period (30–120 min) for both metformin (60 mg/kg b.w.) and MEPS at the doses of 150 and 300 mg/kg b.w. The reduction in plasma glucose by MEPS was dose dependent. The results indicate that MEPS may exert protective action against the hyperglycemic condition of diabetes mellitus.
**FIGURE 4:** *Effect of MEPS and metformin in oral glucose tolerance test. Data are expressed as mean ± SEM (n = 5). MEPS = methanol extract of P. sylvestris seeds. **p < .001 and *p < .05, compared to diabetic control group (Dunnett's test)*
## Alloxan‐induced diabetes
Oral ingestion of MEPS (150 and 300 mg/kg b.w.) and the standard drug metformin exhibited a significant ($p \leq .001$) antihyperglycemic effect in alloxan‐induced diabetic mice throughout the experimental period, as shown in Table 2. Intraperitoneal treatment with alloxan (60 mg/kg b.w.) caused marked increases in glucose levels in the mice compared to the vehicle treatment group. In the alloxan‐induced diabetic control mice, the blood sugar level remained steady at nearly 15 mmol/L across the measurement times from 0 to 24 h. The standard drug metformin significantly reduced the blood glucose level after 4 h of treatment (10 mg/kg b.w., p.o.). The glucose level of alloxan‐induced diabetic mice also started to decline significantly after 4 h of oral treatment with MEPS even at the lowest dose (50 mg/kg) compared to the diabetic control mice. However, the antihyperglycemic effect of MEPS was noticeably weaker than that of metformin. The glucose‐lowering effect of MEPS was greatest at the maximum dose (300 mg/kg) after 24 h of oral treatment.
**TABLE 2** Blood glucose levels (mmol/L) of mice in the alloxan‐induced diabetes assay

| Group | Treatment | 0 h | 4 h | 8 h | 24 h |
| --- | --- | --- | --- | --- | --- |
| Control | Vehicle (10 ml/kg) | 5.25 ± 0.45 | 5.44 ± 0.39 | 5.19 ± 0.41 | 5.31 ± 0.45 |
| Diabetic control | Vehicle (10 ml/kg) | 15.48 ± 0.39 | 15.04 ± 0.46 | 14.97 ± 0.59 | 15.46 ± 0.19 |
| Positive control | Metformin (10 mg/kg) | 15.15 ± 0.59 | 6.88 ± 0.30* | 4.97 ± 0.07* | 4.46 ± 0.16* |
| Experimental 1 | MEPS (50 mg/kg) | 15.14 ± 0.42 | 13.83 ± 0.30* | 13.73 ± 0.35 | 13.07 ± 0.43* |
| Experimental 2 | MEPS (150 mg/kg) | 15.33 ± 0.34 | 12.47 ± 0.37* | 12.09 ± 0.35* | 11.08 ± 0.33* |
| Experimental 3 | MEPS (300 mg/kg) | 15.85 ± 0.23 | 11.54 ± 0.36* | 10.76 ± 0.21* | 9.76 ± 0.28* |

*$p \leq .05$ compared to the diabetic control group (Dunnett's test).
## DISCUSSION
Plants are gifts of nature housing thousands of important biochemicals that play major roles in the regulation and maintenance of the body's homeostasis (Alam et al., 2020, 2021; Islam et al., 2022). The present study investigated the antihyperglycemic activities of the crude extract of P. sylvestris seed (MEPS) and the rationale for its use in diabetes as claimed in traditional medicine. The plant P. sylvestris grows in the wild and is cultivated in different regions of southeast Asia, including Bangladesh (Lamia & Mukti, 2021). The plant is also economically valued for its multiple household and industrial uses and its nutritional and medicinal significance in Bangladesh (Chowdhury et al., 2008; Lamia & Mukti, 2021). Previous studies reported that the plant seeds are enriched with antioxidants (Kothari et al., 2012) and protective oil (Qidwai et al., 2018). Recently published literature demonstrated that the alcohol extract of the seeds of Phoenix dactylifera, a native date palm of the Arecaceae family, possesses promising free radical scavenging activity and reduced blood glucose in diabetic rats (Abiola et al., 2018). The current study reveals the phytochemicals possibly responsible for the oxidative radical scavenging capacity and antihyperglycemic activities of the seeds of P. sylvestris, extensively grown in Bangladesh.
Plant extracts and compounds with profound antioxidant capacity could be promising candidates for the management of oxidative stress‐induced diseases such as diabetes (Ashrafi et al., 2022; Sultana et al., 2022; Vinayagam et al., 2016). Phenolics and flavonoids are significant phytochemicals shown to remarkably restore oxidative damage by scavenging the free radicals produced in diabetic patients (Emon et al., 2020, 2021; Sarian et al., 2017; Vinayagam et al., 2016). Pre‐clinical studies showed that plant phenolics could elevate plasma insulin levels and increase glucose uptake by accelerating hepatic glycolysis, glycogenesis, and gluconeogenesis (Chakrabarty et al., 2022; Rudra et al., 2020; Vinayagam et al., 2016). The antioxidant defense mechanism of flavonoids involves the mitigation of reactive oxidative species‐induced endothelial cell damage and endoplasmic reticulum stress responsible for impaired insulin and hyperglycemia (Sarian et al., 2017). The presence of a substantial amount of total phenolic and flavonoid contents and the prominent antioxidant capacity of the crude extract of P. sylvestris seed has been documented in recently published literature (Kothari et al., 2012; Qidwai et al., 2018). However, it was noticeable that the phytochemical contents varied with the extraction methods (Kothari et al., 2012; Qidwai et al., 2018). The variability could also be attributable to geographical, ecological, and botanical conditions and harvesting times. The results of the present study indicate that P. sylvestris seed (MEPS) grown in Bangladesh contains substantial amounts of phenolics and flavonoids, and the scavenging of DPPH and NO free radicals by MEPS was also noticeable. Furthermore, several antioxidant compounds, including nerolidol (Neto et al., 2013), citronellol (Jagdale et al., 2015), and phytol (Santos et al., 2013), were identified in the GC–MS analysis of MEPS. The substantial retention of phenolic and flavonoid compounds and the promising free radical scavenging capacity of MEPS further encouraged the investigation of its effect against oxidative stress‐related hyperglycemia.
Hyperglycemia and fluctuation of blood glucose levels are critical pathological indicators of the development and progression of diabetes (Mathew & Tadi, 2021). The oral glucose tolerance test primarily indicates impaired glucose tolerance, which reflects insulin resistance and associated problems of carbohydrate metabolism (Andrikopoulos et al., 2008). The test is also commonly performed to evaluate the glucose tolerance improvement capability of drug candidates or plant extracts before assessment in additional diabetic models (Abiola et al., 2018; Dauki et al., 2022; Sornalakshmi et al., 2016). In glucose‐loaded non‐diabetic mice, MEPS treatment showed a significant reduction in plasma glucose level. The result indicates that MEPS could be effective for improving the metabolic uptake of glucose and re‐establishing normal blood glucose levels. To confirm the enhancement of glucose tolerance under diabetes‐associated conditions, MEPS was further challenged in alloxan‐induced diabetic mice. Alloxan selectively damages a large number of pancreatic beta cells and inhibits the sensitivity of the pancreatic glucokinase enzyme, which results in reduced insulin release and reduced glucose uptake by the tissues. Therefore, the blood glucose level rises significantly, a consequence characterized as hyperglycemia (Saravanan & Pari, 2005). Besides, alloxan administration induces excessive generation of free radicals such as reactive oxygen species (ROS) through the activation of hydroperoxides and the lipid peroxidation system, which leads to pancreatic tissue injury and promotes the pathogenic consequences of diabetes (Halliwell & Gutteridge, 2015; Sabu & Kuttan, 2004). Both mechanisms of alloxan action lead to a pathological state of type 1‐like, insulin‐dependent diabetes (Macdonald Ighodaro et al., 2017). The significant decrease in blood glucose level produced by MEPS (Table 2) indicates that it remarkably alleviated the hyperglycemic effect produced by alloxan, and its antioxidant potential may play a pivotal role in this effect. The presence of the antidiabetic agent linalool (More et al., 2014) as well as the antioxidant compounds nerolidol (Neto et al., 2013), citronellol (Jagdale et al., 2015) and phytol (Santos et al., 2013) in MEPS (Table 1) further supports the outcome of the study.
## CONCLUSION
The present study revealed that the methanol extract of P. sylvestris seeds (MEPS) possesses strong antioxidant and antihyperglycemic activities. Quantitative analysis indicated that MEPS contains a considerable amount of phenolics and flavonoids. In addition, MEPS showed potent scavenging activity against the free radicals generated by DPPH and NO. MEPS significantly reduced the hyperglycemic effect induced by glucose and alloxan. This effect could be associated with its antioxidant action as well as with the presence of bioactive compounds, which were confirmed by GC–MS analysis. Therefore, further studies on the isolation and biological evaluation of the individual compounds are required. The results of the present study indicate that P. sylvestris seed could be a potential natural source for developing antidiabetic compounds.
## FUNDING INFORMATION
The investigation was partially done in the Molecular Pharmacology and Herbal Drug Research Laboratory, which was established through financial support from the Higher Education Quality Enhancement Project (HEQEP), AIF, Round‐III, Window‐2, CP‐3258, University Grants Commission (UGC) of Bangladesh.
## CONFLICT OF INTEREST
No potential conflict of interest was reported by the authors.
## DATA AVAILABILITY STATEMENT
All data analyzed during this research are included in the published manuscript. The datasets generated during this research are not publicly available, although they can be obtained from the corresponding author upon reasonable request.
# Fibrosis of Peritoneal Membrane, Molecular Indicators of Aging and Frailty Unveil Vulnerable Patients in Long-Term Peritoneal Dialysis
## Abstract
Peritoneal membrane status, clinical data and aging-related molecules were investigated as predictors of long-term peritoneal dialysis (PD) outcomes. A 5-year prospective study was conducted with the following endpoints: (a) PD failure and time until PD failure, (b) major cardiovascular event (MACE) and time until MACE. A total of 58 incident patients with peritoneal biopsy at study baseline were included. Peritoneal membrane histomorphology and aging-related indicators were assessed before the start of PD and investigated as predictors of study endpoints. Fibrosis of the peritoneal membrane was associated with MACE occurrence and earlier MACE, but not with patient or membrane survival. Serum α-Klotho below 742 pg/mL was related to the submesothelial thickness of the peritoneal membrane. This cutoff stratified the patients according to the risk of MACE and time until MACE. Uremic levels of galectin-3 were associated with PD failure and time until PD failure. This work unveils peritoneal membrane fibrosis as a window into the vulnerability of the cardiovascular system, whose mechanisms and links to biological aging need to be better investigated. Galectin-3 and α-Klotho are putative tools to tailor patient management in this home-based renal replacement therapy.
## 1. Introduction
Peritoneal dialysis (PD) is a home-based modality of renal replacement therapy and a good option for patients with chronic kidney disease (CKD). Independently of the chronological age, this population commonly presents an accelerated aging process affecting skeletal, immune, renal, and cardiovascular systems [1]. Therefore, the risk of mortality in CKD patients is increased by 10- to 20-fold in comparison to individuals with normal renal function [2]. Moreover, cardiovascular toxicity caused by uremia represents a major factor for the increased mortality in dialysis programs [3].
The dialytic capacity of the peritoneal membrane is pivotal in PD, but the integrity of the membrane in the uremic patient might have been overlooked. The existence of a high person-to-person variability in the status of the membrane before the start of PD was recently reported and related to the anti-aging molecule α-Klotho [4].
Deficiency of α-Klotho is well known to be involved in damage to the cardiovascular system, atherosclerosis, skin atrophy and osteoporosis, traits commonly associated with human aging [5,6]. Those traits also overlap with the manifestations of CKD [7], suggesting that α-Klotho might be an important player in PD outcomes. α-Klotho is changed in uremia [8], which is recognized as an imbalance between protective and harmful molecules [9]. In uremia, the concentrations of proteins associated with mechanisms underlying early aging can be affected, such as those related to inflammation and fibrosis in multiple organs/tissues [10].
In fact, α-Klotho was shown to be a uremic molecule implicated in the vulnerability of the peritoneal membrane, expressed as submesothelial fibrosis [4]. As more vulnerable peritoneal membranes were associated with low circulating α-Klotho, we herein hypothesized that α-Klotho might represent a multifaceted marker of both the survival of the membrane and the survival of the patient. Therefore, we conducted a prospective longitudinal observational study in a cohort of incident PD patients to investigate the impact of aging-related uremic toxins, peritoneal membrane status and the patient's frailty on long-term PD outcomes.
## 2.1. Baseline Characterization of Study Population
This observational prospective cohort study included 58 patients, followed for 60 months. A total of $31\%$ were female. At baseline, patients were 56 (30–79) years old with a median residual renal function, assessed by rGFR, of 7 (4–10) mL/min/1.73 m2. The underlying renal diseases were diabetic renal disease ($20\%$), chronic glomerulonephritis ($20\%$), hypertensive nephrosclerosis ($23\%$), autosomal dominant polycystic kidney disease ($11\%$) and chronic pyelonephritis ($10\%$). Twenty-two patients had fibrosis of the peritoneal membrane at the baseline of the study.
Concerning dialysis parameters, 2 and 32 patients were fast and average-fast transporters, respectively, and $94\%$ of patients had good efficacy of dialysis.
Regarding therapeutics, patients with atherosclerotic arterial disease ($40\%$) were treated with the highest tolerated dose of statins and antiplatelet therapy. In addition, all patients were on inhibitors of the renin–angiotensin axis (ACE inhibitors or ARBs) and diuretic therapy. Eighteen patients ($31\%$) were on spironolactone, which was mainly added in those with fibrosis of the peritoneal membrane before the start of PD. A total of 12 patients were on beta-blockers and $30\%$ were on other antihypertensive drugs. The number of patients in treatment for mineral bone disease was low.
The normalized protein catabolic rate (nPCR) was 0.99 (0.79–1.09) g/Kg/day and $47\%$ had proper nutrition. A total of 10 patients were vulnerable and 5 were frail according to the Edmonton scale.
The baseline variables of the study were analyzed according to the biopsy score of the membrane (Table 1), which considers submesothelial compact zone thickness (STM), vasculopathy and inflammation [4]: Score 0: no fibrosis, vasculopathy, or inflammation; Score 1: no fibrosis, but vasculopathy and/or inflammation; Score 2: fibrosis with/without vasculopathy and/or inflammatory changes.
Overall, at baseline, patients with membrane fibrosis received more spironolactone, antiplatelet and statin therapy. In the S2 group (fibrosis), more than half of the patients had peripheral arterial disease (PAD). While the biopsy score was not related to the age of the patients, the cutoff for the level of circulating α-Klotho (an anti-aging molecule) that discriminated the existence of peritoneal membrane fibrosis before the start of PD was defined by performing a ROC analysis (AUC = 0.860, $p \leq 4 \times 10^{-6}$). This cutoff was established at 742 pg/mL, with $83\%$ sensitivity (to detect fibrosis) and $71\%$ specificity (to detect no fibrosis).
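A runnable sketch of this type of cutoff derivation is shown below. It selects the threshold by Youden's J statistic, which is one common criterion (the study does not state its exact criterion), and the α-Klotho values are simulated rather than patient data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Simulated cohort: 22 patients with membrane fibrosis, 36 without, with
# lower alpha-Klotho in the fibrosis group. Values are illustrative only.
rng = np.random.default_rng(1)
fibrosis = np.r_[np.ones(22), np.zeros(36)]
klotho = np.where(fibrosis == 1,
                  rng.normal(600, 120, 58),   # fibrosis: lower alpha-Klotho
                  rng.normal(900, 150, 58))   # no fibrosis: higher levels

# Lower Klotho predicts fibrosis, so use -klotho as the ROC score.
fpr, tpr, thr = roc_curve(fibrosis, -klotho)
auc = roc_auc_score(fibrosis, -klotho)
best = np.argmax(tpr - fpr)                   # Youden's J = sens + spec - 1
cutoff = -thr[best]                           # back to the pg/mL scale
print(f"AUC = {auc:.3f}; cutoff ~ {cutoff:.0f} pg/mL; "
      f"sensitivity = {tpr[best]:.2f}; specificity = {1 - fpr[best]:.2f}")
```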
## 2.2. Impact of the Status of the Peritoneal Membrane and Age-Related Indicators in PD-Related Outcomes
Regarding the long-term outcomes of the study, the minimum time on PD was 13 months and the median time was 42 (30–58) months. Technical failure during the follow-up period occurred in $41\%$ of patients, with a median time until failure of 40 (26–56) months.
A total of 27 patients ($47\%$) had a MACE during the study, with a minimum time for MACE of 8 months and a median time of 17 (12–31) months.
Next, we investigated the relation of study outcomes with the status of the membrane (biopsy score, STM, α-Klotho levels with a 742 pg/mL cutoff as a surrogate of fibrosis) and the age-related baseline indicators (age, serum biomarkers, frailty).
## 2.3. Status of Peritoneal Membrane, Age-Related Indicators and Technical Failure of Peritoneal Dialysis
Contrary to our initial hypothesis, the status of the membrane was not associated with technical failure (Table 2). Overall, the patients with PD failure, compared to those without, were older, had higher frailty scores, were more likely to be on calcium channel blockers (Table 2) and presented higher circulating galectin-3 at the study baseline. The use of icodextrin solutions, the amount of glucose applied, and diabetes were not associated with failure (Table 2) or time to PD failure (Table 3).
In addition, galectin-3 was also related to the time until PD failure (Table 3). A cutoff of galectin-3 to discriminate PD failure was established at 8.88 ng/mL ($92\%$ sensitivity and $46\%$ specificity), which was also associated with the survival of the peritoneal membrane (Figure 1A). This cutoff was independently associated with PD failure in a model adjusted for age, PAD, and calcium channel blockers (CCB) (Figure 1B), wherein age, frailty score and icodextrin did not account for the prediction of time to PD failure.
## 2.4. Peritoneal Membrane, Age-Related Indicators and Major Cardiovascular Event
While not related to PD failure, the presence of membrane fibrosis at the study baseline was associated with the occurrence of MACE (Table 2) and time to MACE (Table 3, Figure 2A). Both endpoints were also related to age, frailty score, arterial atherosclerotic disease, use of statins, nPCR, beta-blockers and Kt/V (Table 2 and Table 3).
The existence of fibrosis in the peritoneal membrane at the study baseline was independently associated with time to MACE in a model adjusted for age, nutritional status and PAD (Figure 2B), wherein the frailty score or heart failure did not account for the prediction of time to MACE. This multivariate association was maintained when the membrane status was inferred by the non-invasive surrogate α-Klotho, using the identified cutoff for α-Klotho of 742 pg/mL instead of the biopsy score (Figure 2C). The association of time until the occurrence of MACE with atherosclerotic disease might be inferred by the use of antiplatelet therapy (Figure 2D), with the α-Klotho cutoff maintained as an independent factor in the model, together with age and antiplatelet use at the study baseline. The estimated survival probability for time to MACE discriminated by α-Klotho levels, in a model adjusted for age, frailty, nPCR, rGFR and use of antiplatelet drugs, is represented in Figure 2E.
Overall, our results suggest a link between the vulnerability of a patient’s cardiovascular system and the status of the peritoneal membrane. In addition to age, lower α-Klotho and PAD were also predictors of cardiovascular risk over time in different multivariate models.
## 2.5. Peritoneal Membrane, Age-Related Indicators and All-Causes Mortality
A total of six deaths occurred during the study, five related to cardiovascular disease and one to malignancy; these were not related to the biopsy score of the peritoneal membrane. The cardiovascular mortality and survivor groups had similar ages, biopsy scores and frailty, as well as similar levels of serum biomarkers.
## 3. Discussion
Our data provide new information about the links between the peritoneal membrane, uremia and PD outcomes. We found that blood levels of galectin-3 represent a putative tool to identify patients at higher risk of PD failure. In addition, and contrary to our initial hypothesis, baseline membrane fibrosis was not a predictor of technical failure, time to failure or all-causes mortality in PD. Instead, the status of the peritoneal membrane was related to MACE and time until the occurrence of MACE, which can be inferred from circulating α-Klotho.
The rationale for the choice of the pre-PD molecules was driven by the hypothesis that prematurely aged phenotypes of the peritoneal membrane could be associated with poorer long-term PD outcomes. These phenotypes are difficult to predict only from demographic characteristics, but could be favored by a uremic toxic environment, patients' frailty and aging. Therefore, a group of aging-related indicators was investigated as predictors of PD outcomes. PD outcomes were associated with uremic molecules, but not with the frailty test applied. This test was chosen due to its simplicity, its suitability for daily clinical practice, and its validation in Portuguese [11,12].
The person-to-person variability in membrane status and functions, even before the start of PD [4], is likely to be driven by genetic and non-genetic factors [13,14,15,16]. The latter include exposure to glucose, peritonitis, loss of residual renal function, inflammation and uremia [4,17,18]. In this context, better knowledge about aging-related uremic molecules might fulfill clinicians' aims for accessible risk stratification tools for tailored prescriptions. Aiming at a proof of concept that uremia-related mechanisms impact both membrane and patient survival, we selected a panel of proteins reported to be associated with aging, inflammation and fibrosis in other organs/tissues [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33].
We found that the status of the membrane (evaluated by histomorphology, STM and a surrogate α-Klotho cutoff) was not associated with changes in peritoneal transport function. Moreover, the pre-PD membrane status was not predictive of the long-term survival of either the peritoneal membrane or the patients.
As α-Klotho is associated with fibrosis of the peritoneal membrane [4], the absence of an association between α-Klotho and PD failure was an unexpected finding. While it did not consider fibrosis, a previous study of arteriolar structure concluded that membrane arteriolar frailty in CKD stage 5 patients parallels cardiovascular system damage [34]. Therefore, as α-Klotho is associated with arteriosclerosis and aging, our results might suggest that the peritoneal biopsy score reflects vascular vulnerability more than the integrity of the membrane. This novel and overlooked dimension might account for the shared mechanisms of a persistent uremic phenotype, premature aging, and fibrosis of different tissues. In fact, the membrane might represent not a risk factor but a marker of a particular cardiovascular vulnerability profile.
Substantial cardiovascular risk persists in CKD patients, despite the treatment of established cardiovascular risk factors such as arterial hypertension and dyslipidemia. Knowledge about the uremia profiles that might be predictors of these risks will pave the way for personalized interventions. Moreover, this knowledge aligns with the need for novel drugs to control the unbalanced status of protective and deleterious molecules that constitutes uremia.
Our data might support that even patients who are older, frail, at higher cardiovascular risk and/or have a worsened peritoneal membrane status might take advantage of this home-based modality of renal replacement therapy, because we did not find any association between frailty or peritoneal membrane status and mortality or survival in the technique. Attention must be paid to the combination of atherosclerotic arterial disease, namely PAD, and low α-Klotho levels. α-Klotho is an anti-aging molecule that exerts beneficial effects on the endothelium [35]. Moreover, α-Klotho-deficient mice show increased vascular calcification [36,37], further supporting a beneficial cardiovascular role of α-Klotho and the putative relevance of recombinant α-Klotho to control the burden of comorbidities in PD patients.
In contrast to α-Klotho, there was a clear association between galectin-3 and PD failure. Galectin-3, which is secreted by macrophages, has been associated with an inflammatory and fibrotic phenotype [19,25,26,27,28,29]. Moreover, Bellón et al. [2011] showed that alternatively activated macrophages, or M2 phenotypes, were present in the peritoneal effluent drained from patients and were able to stimulate fibroblast proliferation and the loss of peritoneal function [30].
α-Klotho and galectin-3 share common characteristics, e.g., both are uremic toxins and have been related to fibrosis and inflammation. However, unlike α-Klotho, galectin-3 was associated with PD failure. Therefore, the differences found in PD outcomes between poor-α-Klotho versus rich-galectin-3 uremic profiles suggest different underlying mechanisms. Moreover, while low baseline α-Klotho was highly associated with cardiovascular disease, such an association was not found for galectin-3 (Table 2). Instead, our data indicated galectin-3 as a predictor of earlier PD failure. Further studies are necessary to validate these data, but a putative explanation for the galectin-3 result is that this molecule is a high-affinity binding protein for advanced glycation end products [38], whose relation to poor membrane efficiency and survival is well accepted [39,40]. Of note, inhibitors of galectin-3 are currently being investigated in clinical trials [41,42,43], although in areas other than PD.
Our study has several strengths: all biomarker measurements were performed in the same laboratory to ensure measurement consistency across the cohort, we analyzed an anatomical territory affected by fibrosis, and we achieved a long follow-up period.
However, the study has some limitations. Firstly, the strict inclusion criteria from a single PD center implied that a rather small sample was studied, and serum biomarkers were only measured at baseline, which might have hampered finding associations with time-dependent outcomes (PD failure and MACE). Secondly, other parameters of adequacy such as nutrition and volemia were neglected in our research, which may have influenced the data analysis and affected our prediction of the long-term outcomes of patients. Moreover, our data only covered clinical examinations and basic personal information, not environmental conditions such as psychosocial and economic dimensions, which can also affect clinical outcomes.
Further research might focus on the putative role of galectin-3 and α-Klotho as tools to tailor patient management in this home-based renal replacement therapy.
## 4.1. Study Design and Participants
This was a single center, prospective study with 60 months of follow-up that included incident patients at the PD Unit of Santa Cruz Hospital, Centro Hospitalar de Lisboa Ocidental, Portugal. The study was approved by the Ethics Committee of the NOVA Medical School, Faculdade de Ciências Médicas, NOVA University of Lisbon (Approval number 50/2019). The study was conducted according to the Declaration of Helsinki and Good Clinical Practices and complied with the European Union GDPR legislation.
At enrollment, patients were referred from the Nephrology consultation inside or outside the hospital for an information consultation, and were enrolled consecutively. The main purposes of the consultation were to assess the eligibility criteria for the renal replacement technique and to provide information allowing an informed choice. This consultation included a multidisciplinary team composed of a doctor, nurse, nutritionist, and social worker. Inclusion criteria were being over 18 years of age, having a stable clinical condition (defined by the absence of serious abdominal infections such as diverticulitis, pancreatitis and cholecystitis, or active neoplasia), and being on PD with a biopsy of the peritoneal membrane. The exclusion criterion was previous insults to the peritoneal membrane (such as surgeries or peritonitis).
Non-autonomous patients were included for assisted PD whenever there was a caretaker. All patients signed informed consent.
## 4.2. PD Prescription
PD was started within 30 (21–44) days after the implantation of the catheter. All patients started on continuous ambulatory peritoneal dialysis (CAPD); after the first year, $30\%$ of patients switched to automated peritoneal dialysis (APD) and remained stable over the observation period. All patients were treated with dialysis solutions with a reduced content of glucose degradation products and a normal pH (Baxter®, Deerfield, MA, USA and Fresenius®, Bad Homburg, Germany). At baseline, no hypertonic solutions were used and a total of $38\%$ of the patients received polyglucose. The major reasons to include polyglucose in the prescription were hydration status ($41\%$), diabetes and the presence of basal peritoneal membrane fibrosis ($30\%$). Prescriptions of amino-acid-containing solutions for PD were exclusive to diabetic patients. The daily quantity of administered glucose was maintained for CAPD, but increased in APD patients over time to achieve adequate ultrafiltration and fluid balance.
## 4.3. Baseline Variables
The following variables were assessed at study baseline: peritoneal and renal Kt/V, urea and creatinine clearances, glomerular filtration rate (GFR), body surface area (BSA), and protein catabolic rate, calculated using Patient onLine (POL) software version 6.3 (Fresenius®, Bad Homburg, Germany). These variables were investigated as factors with an impact on PD outcomes. All patients were followed up until death, PD drop-out, or 30 June 2019.
## 4.4. Study Outcomes
The primary outcomes were PD-related outcomes:
- PD technique failure: ultrafiltration failure, peritonitis, or dialysis inefficacy. Patients were considered free of technical failure when achieving 60 months of follow-up.
- Time to technique failure: the time on PD of each patient in the study until technical failure. Participants dropping out of PD for reasons other than technical failure (switching to hemodialysis by choice, kidney transplantation, transfer to other PD centers, or loss to follow-up) were censored.
The secondary outcomes were cardiovascular outcomes:
- All-cause mortality.
- Major cardiovascular event (MACE) after 3 months on PD. MACEs were defined according to validated clinical criteria and included coronary heart disease (CHD), congestive heart failure (HF), acute myocardial infarction (AMI), acute cerebral infarction (ACI) and cardiac death caused by AMI, arrhythmias or HF. CHD was defined as ≥$50\%$ diameter stenosis of coronary arteries by either coronary angiography or CT angiography [48]. HF was diagnosed according to ESC guidelines for the diagnosis and treatment of chronic heart failure [49]. AMI was diagnosed according to ESC guidelines for the management of acute coronary syndromes [50]. ACI was defined as an acute neurological event lasting more than 24 h associated with clinical evidence of an ischemic focus of the brain [51]. Cardiac death was defined as death caused by AMI, arrhythmias or CHF.
- Time to MACE: defined for each patient as the time in the study until a MACE. Censored data were defined for those dropping out of the study without MACE or those reaching the end of the study without MACE.
## 4.5. Statistical Analyses
Categorical variables are presented as absolute (n) and relative frequencies (%); continuous non-normally distributed data are expressed as median (interquartile range). The Kruskal–Wallis test was used to assess differences between three or more independent groups. The Mann–Whitney U-test was used to assess differences between two independent groups. Potential associations between categorical data were analyzed using the Chi-Squared test. ROC curves were also used to identify cut-offs for potential blood biomarkers. Multiple Cox proportional hazards regression models were performed to assess potential predictors of survival, technique survival and the time to the occurrence of a cardiovascular event. The proportional hazards assumption was assessed through the Schoenfeld residual plots. All models were fit using the ‘survival’ R package [52,53].
The optimal cutpoints were obtained through the maximally selected rank statistics method (see [54] for more details) using the ‘maxstat’ R package [55] and considering the time until PD failure with the censor variable, indicating whether the patient suffered a technique failure or not.
Survival curves were generated using the Kaplan–Meier technique and tested using the log-rank test.
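For a concrete picture of this workflow, the sketch below reproduces its core steps (Kaplan–Meier estimation with a log-rank test, and a Cox proportional hazards model) in Python with the lifelines package. The study itself used the R 'survival' and 'maxstat' packages, and the data frame here is simulated, so this is an analogue of the analysis rather than a reproduction of it.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Simulated cohort of 58 patients followed for up to 60 months.
rng = np.random.default_rng(2)
n = 58
df = pd.DataFrame({
    "months": rng.exponential(45, n).clip(1, 60).round(),
    "failure": rng.integers(0, 2, n),      # 1 = technique failure observed
    "gal3_high": rng.integers(0, 2, n),    # galectin-3 above a cutoff
    "age": rng.normal(56, 12, n),
})

# Kaplan-Meier estimate per galectin-3 stratum, compared by log-rank test.
hi, lo = df[df.gal3_high == 1], df[df.gal3_high == 0]
km = KaplanMeierFitter().fit(hi["months"], hi["failure"], label="galectin-3 high")
print("median survival (high stratum):", km.median_survival_time_)
print("log-rank p:", logrank_test(hi["months"], lo["months"],
                                  event_observed_A=hi["failure"],
                                  event_observed_B=lo["failure"]).p_value)

# Cox proportional hazards model using the remaining columns as covariates.
cph = CoxPHFitter().fit(df, duration_col="months", event_col="failure")
cph.print_summary()
```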
A significance level of α = 0.05 was used throughout the study. Statistical analyses were performed with R software, version 4.2.0, and SPSS.
# Safety evaluation of Balanced Health Care Dan—A medicinal formulation containing traditional edible ingredients in lung tumor‐loaded mice
## Abstract
Chinese formulation‐based medicinal food has been widely used in clinical trials, but its safety is not well studied. In this research, the edible safety of Balanced Health Care Dan—a formulation containing traditional edible ingredients that was initially formulated to reduce side effects for lung cancer patients—was assessed in mice based on biochemical and gut microbial analyses. The experimental mice were subcutaneously loaded with lung tumor A549 cells and then administered Balanced Health Care Dan (200 mg/kg or 400 mg/kg b.w. by gavage) for 4 weeks. The body weight, blood parameters, and pathogenic phenotypes in tissues were examined. No toxicological symptoms were found in experimental mice compared with the normal control. Comprehensive analyses were also conducted to evaluate the intestinal microbiota, which are associated with many diseases. Balanced Health Care Dan modified the gut microbiota structure in a positive way. In conclusion, this Chinese formulation‐based medicinal food showed no toxicological effect in mice over the 4-week feeding experiment and has the potential to be used in clinical trials.
## INTRODUCTION
Chinese formulation‐based medicinal food has made great progress in clinical use. There is an increasing variety of traditional Chinese formulations used in cancer treatment, but the safety of these formulations has not been systematically evaluated.
Tumors seriously threaten human life and health, leading to increased mortality and morbidity worldwide (Ferlay et al., 2021; Siegel et al., 2019). Among all diseases, cancer has the second highest death rate, next only to cardiovascular and cerebrovascular diseases (Siegel et al., 2018; Zhang et al., 2021). Lung cancer is a malignant tumor characterized by a rapid proliferation rate, low survivability, and high mortality; it has the highest mortality rate among all cancers (Su et al., 2021). Surgery, radiotherapy, and chemotherapy are the most common clinical treatment strategies (Couzin‐Frankel, 2013; Ma et al., 2019). These treatments have greatly improved the prognosis of lung cancer patients, but they have also brought about many side effects (Wang et al., 2020, 2021), which greatly reduce patients' quality of life during treatment.
Traditional medicinal food (TMF) offers a promising option to reduce the side effects during cancer treatment (Luo et al., 2019; Zhang et al., 2022). “Balanced Health Care Dan” is a formula designed to improve patients' quality of life and decrease chemotherapy‐induced adverse effects. Although most TMFs are botanical and have traditionally been considered nontoxic, the ingredients of TMFs are generally complex, with certain substances that might cause additive toxicities (Lin et al., 2018; Zhang & Yuan, 2012). Therefore, edible safety evaluation and clinical studies of TMFs are equally important. Animal‐based toxicity evaluation is necessary before the clinical application of these TMFs added to the diet of cancer patients (Shen et al., 2014; Wang, 2015). The current research focuses on the edible safety evaluation of “Balanced Health Care Dan,” which can be considered a model for traditional formula‐based medicinal food.
## Animals and cells
In total, 72 BALB/c nude mice (36 males and 36 females), aged 5 weeks with an average body weight of 40–60 g, were purchased from Vital River Laboratory Animal Technology Co., Ltd. The animal room was maintained at 23 ± 2°C, with a relative humidity of 50 ± $5\%$. A 12 h light/dark cycle was provided by automated fluorescent illumination. All mice were provided with their diet and water ad libitum. The animal studies were approved by the Animal Care and Use Committee at China Agricultural University and all experiments were performed in accordance with relevant guidelines and regulations (Approval Number: AW02110202‐4).
A549 lung cancer cells (ATCC) were cultured in a carbon dioxide cell incubator at 37°C. The medium was Dulbecco's modified Eagle medium (DMEM) basal medium, complemented with $10\%$ fetal bovine serum and 100 U/ml penicillin and streptomycin. The cells were digested with $0.25\%$ trypsin and passaged on alternate days.
## Traditional formula ingredients
The Balanced Health Care Dan was prepared using the following ingredients: Dangshen, Astragalus membranaceus, Atractylodes macrocephala, white lentil, Ligusticum chuanxiong, bezoar, musk, Rhodiola, Platycodon grandiflorum, mulberry bark, licorice, Poria cocos, wood incense, Sichuan pepper, aloes, polygonatum, purple Ganoderma lucidum, Hedyotis diffusa, Dendrobii Officinalis Caulis, pollen, and honey. Each ingredient was washed, dried, disinfected, and ground into a very fine powder. The powder was then mixed with condensed honey, followed by pellet making for serving. Currently, the formulation is under application for patent protection (Application No. CN20180374829.6).
## Mouse lung cancer tumor modeling
Each mouse was subcutaneously inoculated with A549 cells (5 × 10⁶ cells/mouse) on the right back. The animals were used for the experiment when the average tumor volume of the group exceeded 100 mm³. The successfully modeled mice were randomly divided into groups as follows: control model mice were gavaged with sterile water, while mice in the low‐dose and high‐dose groups were gavaged with 200 or 400 mg/kg of Balanced Health Care Dan solution, respectively, for 4 weeks. The doses used were physiologically relevant to those administered to humans. Treatment was carried out on six consecutive days per week, and the body weight of the mice was recorded daily.
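The weight-based grouping step described later in the paper can be illustrated with a short sketch. The code below is a minimal, hypothetical version of block randomization stratified by body weight; the function and variable names are ours, not from the study:

```python
import random
from collections import Counter

def stratified_randomize(weights, group_names, seed=42):
    """Assign animals to groups by sorting on body weight and
    shuffling within consecutive blocks of len(group_names),
    so each group receives a comparable weight distribution."""
    rng = random.Random(seed)
    order = sorted(range(len(weights)), key=lambda i: weights[i])
    assignment = {}
    k = len(group_names)
    for start in range(0, len(order), k):
        block = order[start:start + k]
        rng.shuffle(block)
        for animal, group in zip(block, group_names):
            assignment[animal] = group
    return assignment

# Hypothetical body weights (g) for 27 tumor-bearing male mice
rng = random.Random(0)
weights = [round(rng.uniform(18.0, 24.0), 1) for _ in range(27)]
groups = stratified_randomize(weights, ["CK", "Low", "High"])
print(Counter(groups.values()))  # 9 animals per group
```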
## Blood biochemistry
At the end of the experiment, mice were fasted for 12 h before blood collection. Blood samples were collected from the inner canthus under anesthesia. The serum samples were analyzed for alkaline phosphatase (ALP), aspartate aminotransferase (AST), alanine aminotransferase (ALT), albumin (ALB), creatinine (CREA), urea, triglycerides (TG), and cholesterol (CHO) as previously reported (He et al., 2020).
## Pathology examination
The fresh tumor, liver, kidney, spleen, and lung tissues of mice were fixed with 4% paraformaldehyde for 24–48 h. The tissues were then dehydrated, embedded in wax, and sectioned. HE staining was performed, and the sections were examined microscopically by trained pathology staff.
## Fecal metagenomic analysis
The intestinal contents of mice were collected in sterilized tubes and frozen at −80°C. DNA was extracted from the intestinal contents according to the kit instructions (FDA6512, Beijing Ford Press Technology Co., Ltd.). The 16S rDNA sequencing and data analysis were performed as reported (He et al., 2020; Xu et al., 2020). Briefly, the V3–V4 region of the 16S rDNA was amplified by PCR with specific primers linked to barcodes. Thermal cycling consisted of initial denaturation at 98°C for 1 min, followed by 30 cycles of denaturation at 98°C for 10 s, annealing at 50°C for 30 s, and elongation at 72°C for 30 s, with a final extension at 72°C for 5 min. Sequencing libraries were generated using the TruSeq® DNA PCR‐Free Sample Preparation Kit (Illumina) following the manufacturer's recommendations, and index codes were added. Library quality was assessed on a Qubit 2.0 Fluorometer (Thermo Scientific) and an Agilent Bioanalyzer 2100 system. Finally, the library was sequenced on an Illumina NovaSeq platform. After data filtering, UPARSE software (v7.0.1001) was used to cluster valid reads into operational taxonomic units (OTUs). The Mothur method and the SSU rRNA database of SILVA 138 (http://www.arb‐silva.de/) were used for species annotation. Qiime software (version 1.9.1) was used to analyze diversity, R software (version 2.15.3) was used for PCA, and LEfSe software was used for LEfSe analysis with the default LDA score filter value of 4 (Segata et al., 2011).
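For orientation, the Shannon index reported in the results reduces to a few lines of arithmetic over OTU counts. The sketch below is a minimal illustration, not the Qiime implementation itself; note that software packages differ in the logarithm base they use:

```python
import math

def shannon_index(otu_counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over OTUs with
    nonzero counts, where p_i is the relative abundance of OTU i."""
    total = sum(otu_counts)
    h = 0.0
    for count in otu_counts:
        if count > 0:
            p = count / total
            h -= p * math.log(p)
    return h

# Toy samples: a skewed community scores lower than an even one
print(shannon_index([90, 5, 3, 2]))     # ~0.43
print(shannon_index([25, 25, 25, 25]))  # ln(4) ~ 1.39
```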
## Statistical analysis
The experimental data are presented as mean ± standard deviation and were analyzed with GraphPad Prism 8. One‐way analysis of variance (ANOVA) was applied with Dunnett's post hoc test. Differences were considered statistically significant at *p ≤ .05 and highly significant at **p ≤ .01.
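For readers reproducing the analysis outside Prism, the same comparison can be sketched in Python. The serum values below are made up for illustration (the raw data are not shown here), and `scipy.stats.dunnett` requires SciPy ≥ 1.11:

```python
import numpy as np
from scipy import stats

# Hypothetical serum ALT values (U/L), n = 9 per group
control = np.array([32.1, 30.5, 35.2, 28.9, 31.7, 33.0, 29.8, 34.4, 30.9])
low     = np.array([31.5, 29.9, 36.0, 30.2, 32.8, 28.7, 33.9, 31.1, 30.4])
high    = np.array([33.2, 31.8, 29.5, 34.7, 30.6, 32.4, 28.8, 35.1, 31.3])

# Omnibus one-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(control, low, high)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Dunnett's test: each treated group vs. the shared control
res = stats.dunnett(low, high, control=control)
print("Dunnett p-values (low, high vs. control):", res.pvalue)
```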
## Tumor incidence
A total of 72 mice (36 female and 36 male) were used in this experiment, among which 54 (27 males and 27 females) were successfully loaded with tumors that met the requirement and were included in the follow‐up experiment. The 54 mice were randomly divided into six groups (3 male groups and 3 female groups; 9 mice/group) following a computerized randomization scheme based on body weight. Four weeks of treatment with Balanced Health Care Dan did not significantly affect the body weight of these mice (Figure 1a).
**FIGURE 1:** *Body weight of male and female mice during treatment. (a) Male mice; (b) female mice. CK, control group; Low, low‐dose group; High, high‐dose group*
## Analysis of clinical appearance
During the 4‐week experiment, no treatment‐related adverse effects on the clinical appearance of the animals were observed. The body weight of male and female mice in the experimental groups was comparable with that of the control group on days 7, 14, and 21 and at the end of the experiment. Balanced Health Care Dan slightly but nonsignificantly increased body weight, indicating that the formula did not exhibit any acute toxic effects on the animals' growth and development (Hamaguchi et al., 2019).
## Analysis of hematology
Hematological indexes are important indicators in safety evaluation (Rosa et al., 2018). Derangement of the serum biochemical profile can reflect abnormal nutrient metabolism (Callens & Bartges, 2015; Gwinn et al., 2020; Paiano et al., 2019), as damaged tissues or organs modify serum parameters. We measured blood biochemical indexes, including ALB, ALP, ALT, AST, CREA, urea, CHO, and TG, on day 28 (Figures 2 and 3). These indicators reflect liver function, kidney function, and lipid metabolism. The liver and kidney are important target organs of many toxins, which can alter the relevant indicators after intragastric administration (Calle‐Toro et al., 2020; Ursell et al., 2012). The values of ALP, ALT, AST, and urea, important indexes of liver and kidney function, did not differ significantly between groups. In the female groups, the mean values of ALB and CREA were slightly lower in the low‐dose and high‐dose groups, respectively, compared with control. However, these differences were within the normal range and possibly resulted from individual differences between mice; they were not seen in male mice.
**FIGURE 2:** *Blood biochemistry of male and female mice during treatment. (a) Male mice; (b) female mice* **FIGURE 3:** *Lipid profile of male and female mice during treatment. (a) Male mice; (b) female mice. CHO, cholesterol; TG, total triglycerides*
The lipid profile reflects the basic metabolism of the body, and CHO and TG are common indicators of lipid metabolism. In the female group, the mean TG value in the low‐dose group was slightly reduced compared with that of the control group, suggesting that the treatment might have a hypolipidemic effect. Since this apparent benefit was not observed in the high‐dose group, the difference is more likely attributable to background variability and sporadic deviation.
## Analysis of pathology in organs
A complete gross necropsy and microscopic anatomic pathological analysis of organs were conducted on all animals after the 4‐week feeding study. The liver, kidney, lung, and spleen showed no pathological lesions (data not shown) in the low‐dose or high‐dose groups. From the pathological point of view, Balanced Health Care Dan therefore showed no toxic or adverse effects.
## Analysis of microbiota
Intestinal flora is known as the "second fingerprint" of the body (Duffy et al., 2015; Ursell et al., 2012) and has an enormous impact on the nutrition and health status of the host. Diet directly affects the balance of the intestinal microbiota, so for edible safety evaluation, analyzing changes in the intestinal microbiota can directly reflect the effects of the tested materials on body health (Barko et al., 2017). The evaluation of intestinal microbiota is therefore an important part of edible safety evaluation. Accordingly, in this study, the intestinal contents were used to evaluate the effects of Balanced Health Care Dan.
Based on metagenomic sequencing, the relative abundance of the intestinal microbiota changed after TMF treatment in both male and female mice (Figure 4a,b). Principal coordinate analysis showed significant changes in the intestinal microbiota of male mice in the TMF‐treated group compared with the control group, while the effect of TMF on the intestinal microbiota of female mice was relatively small (Figure 4c,d). The Shannon index was used to measure species diversity (Nielsen, 2021). As shown in Figure 4e,f, TMF treatment significantly altered the structure and composition of the intestinal microbiota in both male and female mice.
**FIGURE 4:** *Overall changes in intestinal flora. (a, b) Overall relative abundance of intestinal flora; (c) OTU analysis of each dose group in male mice; (d) OTU analysis of each dose group in female mice; (e) principal coordinate analysis of male mouse samples; (f) principal coordinate analysis of female mouse samples*
From the analysis at the genus level, the 10 most predominant genera were identified (Figure 5a). Based on the species annotation and abundance information of all samples at the genus level, a correlation heatmap was used to represent the top 35 genera (Figure 5b). Compared with the control group, Bacteroides, Ralstonia, Bilophila, Muribaculum, Prevotellaceae, Alistipes, and Anaerotruncus all showed significant changes in the TMF treatment groups (Figure 5c). The effects differed by sex. For example, in the male groups (Figure 6a,b), Deferribacteres, Deferribacteraceae, and Deferribacterales increased significantly compared with the control group, whereas Bacteroides acidifaciens and Proteobacteria decreased significantly; the ratio of Firmicutes to Bacteroidetes did not change significantly. In the female groups (Figure 6c,d), Muribaculaceae, Blautia, Bacteroides caccae, Clostridia, Firmicutes, Lachnospiraceae bacterium, Lachnospiraceae, Oscillibacter, Lachnospirales, Oscillospirales, and Oscillospiraceae increased significantly, whereas Mucispirillum, Deferribacteraceae, Deferribacterales, Deferribacteres, Bacteroides, Alistipes, Bacteroidaceae, Bacteroides acidifaciens, Bacteroidales, Bacteroidota, Bacteroidia, and Rikenellaceae decreased significantly; the ratio of Firmicutes to Bacteroidetes increased significantly. To further confirm the sex difference in the effects of TMF, functional predictive analysis was performed. As shown in Figure 7a, there was no difference in the top 10 functions between male and female mice, although the top 35 functions varied by sex (Figure 7b).
**FIGURE 5:** *Changes in the level of intestinal flora. (a) Top 10 dominant bacteria analysis; (b) top 35 dominant bacteria thermogram analysis; (c) abundance distribution box map between groups* **FIGURE 6:** *LEfSe analysis in mice (a) and (b) Male mice; (c) and (d) female mice.* **FIGURE 7:** *Function prediction analysis. (a) Relative abundance analysis of top 10 function annotation; (b) cluster analysis of relative abundance of top 35 function; (c) functional notes of Venn diagram; (d) function annotation of petal diagram*
Bacteroides is more abundant in the gut of colorectal cancer patients, which indirectly suggests that a decrease in Bacteroides has a positive effect on the body's resistance to cancer (Garrett, 2019). Proteobacteria have been found to dominate the intestinal microbiota in acute and chronic inflammation caused by infectious pathogens or protozoan parasites, and the same phenomenon has been observed in enteritis‐associated colorectal cancer in animal and human studies (Da et al., 2020). Balanced Health Care Dan significantly reduced Proteobacteria, indicating that it could help improve body health. The abundance of Muribaculaceae is negatively correlated with proinflammatory factors (Chung et al., 2020), and its increased abundance in this study indicates a protective effect on intestinal health. The abundance of Lachnospiraceae increased in the TMF treatment groups; as a potentially beneficial bacterium, Lachnospiraceae participates in the metabolism of a variety of carbohydrates, and its fermentation product acetic acid is a main source of energy for the host (Vacca et al., 2020). In contrast to the control groups, the ratio of Firmicutes to Bacteroidetes did not change in the male groups but significantly increased in the female group, suggesting that after high‐dose treatment, female mice were more likely to harvest energy from the diet to maintain body weight (John & Mullin, 2016; Zhao et al., 2021). Therefore, to a certain extent, Balanced Health Care Dan improved the intestinal microbiota of mice and might thereby enhance body immunity; this effect was more obvious in male mice. From the point of view of intestinal health, Balanced Health Care Dan is thus not potentially harmful to the body. It should be noted that there are large differences in gut microbiota between humans and animal models, arising from species‐specific host–microbial interactions, environment, diet, and genetic responses (Nguyen et al., 2015). Further clinical trials are still necessary for accurate risk assessment.
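As a concrete illustration of the Firmicutes-to-Bacteroidetes ratio discussed above, the computation from phylum-level relative abundances is a one-line summary statistic; the abundance profiles below are hypothetical:

```python
def fb_ratio(phylum_abundance):
    """Firmicutes/Bacteroidetes ratio from a dict of phylum-level
    relative abundances; returns None if Bacteroidetes is absent."""
    f = phylum_abundance.get("Firmicutes", 0.0)
    b = phylum_abundance.get("Bacteroidetes", 0.0)
    return f / b if b > 0 else None

# Hypothetical phylum profiles for a control and a treated sample
control_sample = {"Firmicutes": 0.45, "Bacteroidetes": 0.40, "Proteobacteria": 0.10}
treated_sample = {"Firmicutes": 0.55, "Bacteroidetes": 0.30, "Proteobacteria": 0.05}
print(fb_ratio(control_sample))  # 1.125
print(fb_ratio(treated_sample))  # ~1.83
```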
## CONCLUSION
In a 4‐week animal feeding test, we evaluated the edible safety of a traditional formula‐based medicinal food called Balanced Health Care Dan by examining behavioral performance, body weight, relevant blood parameters, pathological phenotype, and intestinal microbiota in lung tumor‐loaded mice. We found no treatment‐related adverse effects in this short‐term toxicity evaluation. This research provides a new strategy for the edible safety evaluation of TMFs that have the potential to be used for cancer patients.
## CONFLICT OF INTEREST
The authors declare no conflict of interest in this study.
## DATA AVAILABILITY STATEMENT
The datasets used and/or analyzed during this study are available from the corresponding author upon reasonable request.
# 4‐PBA inhibits hypoxia‐induced lipolysis in rat adipose tissue and lipid accumulation in the liver through regulating ER stress
## Abstract
High‐altitude hypoxia may disturb the metabolic modulation and function of both adipose tissue and the liver. The endoplasmic reticulum (ER) is a crucial organelle in lipid metabolism, and ER stress is closely correlated with lipid metabolism dysfunction. The aim of this study was to elucidate whether the inhibition of ER stress could alleviate hypoxia‐induced white adipose tissue (WAT) lipolysis and liver lipid accumulation‐mediated hepatic injury. A rat model of high‐altitude hypoxia (5500 m) was established using a hypobaric chamber. ER stress and lipolysis‐related pathways were analyzed in WAT under hypoxia exposure with or without 4‐phenylbutyric acid (4‐PBA) treatment, and liver lipid accumulation, liver injury, and apoptosis were evaluated. Hypoxia evoked significant ER stress in WAT, evidenced by increased GRP78 and CHOP and by phosphorylation of IRE1α and PERK. Moreover, lipolysis in perirenal WAT significantly increased under hypoxia, accompanied by increased phosphorylation of hormone‐sensitive lipase (HSL) and perilipin. Treatment with 4‐PBA, an inhibitor of ER stress, effectively attenuated hypoxia‐induced lipolysis via the cAMP‐PKA‐HSL/perilipin pathway. In addition, 4‐PBA treatment significantly inhibited the increase in fatty acid transporters (CD36, FABP1, FABP4) and ameliorated liver FFA accumulation. 4‐PBA treatment also significantly attenuated liver injury and apoptosis, likely resulting from decreased liver lipid accumulation. Our results highlight the importance of ER stress in hypoxia‐induced WAT lipolysis and liver lipid accumulation.
Enhanced ER stress‐mediated WAT lipolysis was observed in a rat model of high‐altitude hypoxia, which contributes to hepatic dysfunction and apoptosis through excess release of FFA. Our findings highlight the vital role of 4‐PBA in regulating WAT lipolysis and liver dysfunction via ER stress, which may provide novel insights into systemic metabolic disturbances in high‐altitude areas.
## INTRODUCTION
Ascent to high altitude is associated with multiple physiological and metabolic responses that counter the stress of hypobaric hypoxia. White adipose tissue (WAT) is the body's largest energy reservoir, storing energy in the form of triglycerides in lipid droplets. WAT plays an essential role in maintaining whole‐body lipid metabolism homeostasis, and accumulated evidence has demonstrated a functional association between adipose tissue and the liver (Natarajan et al., 2017; Sun et al., 2012). In our previous work, hypobaric hypoxia was shown to accelerate lipolysis and suppress lipogenesis in WAT (Xiong et al., 2014). Under normal conditions, lipid metabolism is a dynamic equilibrium among different organs. Under hypoxia, however, the activation of lipolysis promotes the release of excessive free fatty acids (FFA), which are taken up by the liver, contributing to ectopic lipid accumulation and liver pathogenesis (Lefere et al., 2016). Adipose tissue dysfunction can increase the delivery of FFA and glycerol to the liver, which drives hepatic gluconeogenesis and facilitates the accumulation of lipids and of lipid intermediates that inhibit insulin signaling (Bosy‐Westphal et al., 2019). Hence, hypoxia‐induced lipid metabolism disorder in WAT may further impair liver function, leading to maladaptation to the high‐altitude environment and increasing the incidence of acute mountain sickness (AMS).
The endoplasmic reticulum (ER) is an organelle that synthesizes, folds, and transports proteins. It is also the site of triglyceride synthesis and nascent lipid droplet formation (Nettebrock & Bohnert, 2019). The sensing, metabolizing, and signaling mechanisms for lipid metabolism exist within or on the ER membrane domain (Balla et al., 2020). Dysregulation of ER homeostasis leads to the accumulation of misfolded proteins in the ER lumen and evokes ER stress (Henne, 2019). To reduce ER stress, the unfolded protein response (UPR) signaling pathways are activated. Accumulating evidence suggests that ER homeostasis and UPR activation play an important homeostatic role in lipid metabolism (Basseri & Austin, 2012; Mohan et al., 2019). As reported by Deng et al., ER stress can induce lipolysis by activating the cAMP/PKA and ERK1/2 pathways (Deng et al., 2012). A previous study also found that burned patients displayed significant ER stress within adipose tissue and that ER stress could augment lipolysis in cultured human adipocytes (Bogdanovic et al., 2015).
Disulfide bond formation during protein synthesis is independent of oxygen; however, post‐translational protein folding and isomerization are oxygen‐dependent (Koritzinsky et al., 2013). Hypoxia exposure can therefore induce extensive protein modification in the ER and result in the accumulation of misfolded/unfolded proteins, which activates the UPR and evokes ER stress (Chipurupalli et al., 2019; Maekawa & Inagi, 2017). We decided to test the hypothesis that ER stress may modulate hypoxia‐induced WAT metabolic derangement and liver dysfunction based on the following evidence: (1) the ER is one of the major sites of lipid metabolism; (2) lipid metabolism and function are sensitive to oxygen concentration; (3) hypoxia can induce ER stress through the accumulation of misfolded proteins (Xu et al., 2015; Yang et al., 2014); (4) ER stress is closely correlated with lipid metabolism dysfunction (Mohan et al., 2019); and (5) lipid metabolism in WAT plays a critical role in the progression of liver dysfunction (Dong et al., 2020).
To address this issue, we investigated the effects of ER stress on hypoxia‐induced lipolysis using the chemical chaperone 4‐PBA, an antagonist of ER stress. The main objective of this study was to clarify the role of ER stress in regulating WAT lipolysis and liver lipid accumulation under continuous high‐altitude hypoxia exposure. An understanding of the interplay between these tissues and the proposed mechanisms may provide novel therapeutic strategies for the treatment of whole‐body metabolic dysfunction at high altitude.
## Animal care
Adult male Sprague–Dawley rats (280–330 g) were purchased from Weitong Lihua Laboratory Animal Limited Company. The rats were housed at room temperature (22°C–25°C) under a 12–12 h light–dark cycle with free access to food and water and were acclimated to these conditions for 1 week before the experiment. All experiments were conducted in strict accordance with the laboratory animal care guidelines published by the US National Institutes of Health (NIH publication no. 85‐23, revised 1996). All protocols concerning animal use were approved by the Institutional Animal Care and Use Committee of the Institute of Basic Medical Sciences, Peking Union Medical College, and Capital Medical University.
## Hypoxic challenge
Hypoxia‐group rats were placed in a hypobaric chamber (Guizhou Fenglei Air Ordnance Co., Ltd.) and subjected to hypoxia mimicking an altitude of 5500 m for 10 days. The chamber was opened daily for 30 min for cleaning and to replenish food and water, and room temperature was kept at 20°C–22°C. Body weights were monitored daily. 4‐PBA (P21005) was purchased commercially (Sigma‐Aldrich). Rats were randomly divided into four groups: (1) control, (2) hypoxia, (3) control + 4‐PBA (30 mg/kg/day), and (4) hypoxia + 4‐PBA (30 mg/kg/day). The dose of 4‐PBA was set based on previous reports (Luo et al., 2015; You et al., 2019; Zeng et al., 2017). All rats were sacrificed by decapitation; serum was obtained by centrifugation and stored at −80°C. The perirenal fat pads were collected and weighed immediately, frozen in liquid nitrogen, and stored at −80°C.
## Histology staining
WAT and liver tissue were fixed in 4% paraformaldehyde overnight, embedded in paraffin, and sliced longitudinally into 4‐μm‐thick sections for hematoxylin‐eosin (HE) staining. The stained slides were examined by microscopy for histomorphological analysis. A commercial terminal deoxynucleotidyl transferase‐mediated dUTP nick‐end labeling (TUNEL) kit (Roche) was used to assess hepatic cell apoptosis. Histological alterations were assessed in randomly selected histological fields at ×400 magnification, and the apoptosis index (AI) was calculated.
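The paper does not spell out the AI formula; assuming the conventional definition (TUNEL-positive nuclei as a percentage of total nuclei, averaged over the counted fields), the calculation is:

```python
def apoptosis_index(tunel_positive, total_cells):
    """Apoptosis index (%) = TUNEL-positive nuclei / total nuclei * 100."""
    if total_cells <= 0:
        raise ValueError("total_cells must be positive")
    return 100.0 * tunel_positive / total_cells

# Hypothetical (positive, total) counts from three x400 fields
fields = [(12, 480), (9, 450), (15, 510)]
per_field = [apoptosis_index(p, t) for p, t in fields]
mean_ai = sum(per_field) / len(per_field)
print(f"AI = {mean_ai:.2f}%")  # ~2.48%
```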
## Western blotting and densitometry analyses
Homogenized rat WAT was lysed in 200 μl RIPA lysis buffer (Beyotime, P0013B) with 1% phenylmethylsulfonyl fluoride and 4% complete protease inhibitor cocktail (Roche). Extracts were centrifuged at 14,000 g for 15 min at 4°C. Eighty micrograms of total protein was subjected to sodium dodecyl sulfate‐polyacrylamide gel electrophoresis and transferred to a nitrocellulose membrane (Millipore Corp., Billerica). Membranes were blocked with 5% non‐fat dried milk in PBS for 1 h with gentle shaking and then incubated with primary antibodies (1:1000) in 1% BSA in PBS overnight at 4°C with shaking. The following primary antibodies were purchased from Cell Signaling Technology: anti‐p‐HSL (#4139), anti‐HSL (#18381), anti‐p‐PKA, anti‐perilipin, anti‐phospho‐PKA substrate (RRXS*/T*) (#9624), anti‐GRP78 (#3183S), anti‐CHOP (#2895P), anti‐protein kinase‐like eIF2α kinase (PERK) (#3192S), and their phosphorylated species. Anti‐ATGL (ab109251), anti‐CGI‐58 (ab111984), and anti‐β‐actin (ab6276) antibodies were purchased from Abcam. Membranes were then washed and incubated with secondary antibodies for 2 h at room temperature. Finally, bands were visualized by enhanced chemiluminescence using a Tanon‐410 automatic gel imaging system (Shanghai Tianneng Corporation). After scanning, band density was analyzed using ImageJ 1.33 software (National Institutes of Health).
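Densitometric quantification as described (band densities normalized to β-actin, plus the phospho/total ratios reported in the results) is simple arithmetic once ImageJ has produced the integrated densities; the numbers below are hypothetical:

```python
def normalize_to_actin(band_density, actin_density):
    """Express a band's density relative to the β-actin loading control."""
    return band_density / actin_density

def phospho_ratio(p_density, total_density, actin_density):
    """Phospho/total ratio for one lane; the actin term cancels
    algebraically, but normalizing both bands first mirrors the
    per-lane workflow described above."""
    return (normalize_to_actin(p_density, actin_density)
            / normalize_to_actin(total_density, actin_density))

# Hypothetical ImageJ integrated densities for one lane
print(phospho_ratio(p_density=5400, total_density=9000,
                    actin_density=12000))  # 0.6
```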
## Reverse‐transcription PCR and quantitative real‐time PCR
Total RNA was prepared from frozen liver tissues with TRIZOL reagent (Invitrogen), and cDNA was synthesized using TransScript™ First‐Strand cDNA Synthesis SuperMix (TransGen Biotech, AT301) on a S1000 Thermal Cycler. Quantitative real‐time PCR was performed using the SYBR® Premix Ex Taq™ kit (Takara, RR420A) and analyzed on a StepOnePlus real‐time PCR system (Applied Biosystems). The primer sequences are listed in Table 1.
**TABLE 1**
| Primer ID | Primer sequence 5′–3′ | Accession No. | Product size (bp) |
| --- | --- | --- | --- |
| CD36 | Fwd: TCCTCGGATGGCTAGCTGATT | NC_051339.1 | 150 |
| CD36 | Rev: TGCTTTCTATGTGGCCTGGTT | NC_051339.1 | 150 |
| FABP1 | Fwd: CTTCTCCGGCAAGTACCAAGT | NM_012556.2 | 162 |
| FABP1 | Rev: CATGCACGATTTCTGACACCC | NM_012556.2 | 162 |
| FABP4 | Fwd: GTAGAAGGGGACTTGGTCGTC | NM_053365.2 | 234 |
| FABP4 | Rev: GCCTTTCATGACACATTCCAC | NM_053365.2 | 234 |
| β‐Actin | Fwd: CGTTGACATCCGTAAAGACC | NM_031144.3 | 260 |
| β‐Actin | Rev: GCTAGGAGCCAGGGCAGTA | NM_031144.3 | 260 |
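The quantification model for the qPCR data is not stated; assuming the widely used 2^−ΔΔCt method (Livak & Schmittgen) with β-actin as the reference gene, the fold change is computed as follows (the Ct values are invented for illustration):

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference) per sample;
    ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for CD36 vs. β-actin
print(ddct_fold_change(24.0, 18.0, 26.0, 18.2))  # ~3.5-fold up
```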
## Serum measurements
Serum levels of non‐esterified fatty acid (NEFA) and glycerol were measured using NEFA kit (A042, Jiancheng Biotechnology) and Glycerol Assay kit (F005‐1, Jiancheng Biotechnology), respectively. These assays were performed according to manufacturer's instructions. Serum levels of triglyceride (TG), total cholesterol (TC), high‐density lipoprotein cholesterol (HDL‐C), and low‐density lipoprotein cholesterol (LDL‐C) were measured by an automatic biochemical analyzer (Chemray 240, Rayto Life and Analytical Sciences).
Serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), and alkaline phosphatase (ALP) microplate test kits were obtained from Nanjing Jiancheng Bioengineering Institute. The assays were performed as previously described (Wang et al., 2020). Briefly, ALT, AST, and ALP activities were evaluated at 37°C for 15 min by assessing the decrease in absorbance at a wavelength of 510 nm with Chemi Lab ALT, AST, and ALP assay kits, respectively.
## Statistical analysis
The data are presented as mean ± standard error (SE). For Western blots, protein levels were normalized to β‐actin. Statistical significance was determined by one‐way analysis of variance (ANOVA) or nonparametric tests for comparisons of more than three groups, using SPSS 18.0 software. A p‐value <.05 was considered statistically significant.
## Hypoxia exposure induces endoplasmic reticulum stress in WAT
To investigate the role of ER stress in WAT under hypoxia treatment, we first examined the expression of the ER stress markers GRP78 and CHOP (Figure 1a). Under ER stress conditions, GRP78 dissociates from the ER stress sensors to bind unfolded proteins, thereby activating the sensors and triggering the UPR. As shown in Figure 1b,c, hypoxia exposure significantly increased the levels of GRP78 and CHOP. Continuous hypoxia treatment also activated ER stress‐related pathways in rat adipose tissue, evidenced by enhanced p‐PERK/PERK (Figure 1d) and p‐IRE1α/IRE1α (Figure 1e) ratios. 4‐PBA treatment significantly attenuated hypoxia‐induced ER stress, as shown by decreased GRP78, CHOP, p‐PERK/PERK ratio, and p‐IRE1α/IRE1α ratio in the 4‐PBA + hypoxia group compared with the hypoxia group.
**FIGURE 1:** *Hypoxia exposure induces endoplasmic reticulum stress in the WAT. The expression levels of ER stress‐related genes in WAT are shown. (a) GRP78, CHOP, p‐PERK, PERK, p‐IRE1α, and IRE1α protein expression levels; (b) relative GRP78 protein expression levels; (c) relative CHOP protein expression levels; (d) p‐PERK/PERK ratio; (e) p‐IRE1α/IRE1α ratio. Data are shown as the mean ± SE of at least two independent western blots, *p < .05, **p < .01, and ***p < .001 (control group vs. hypoxia group, n = 6/group); #p < .05, ##p < .01 (hypoxia group vs. hypoxia + 4‐PBA group, n = 6/group)*
## 4‐PBA treatment attenuates enhanced lipolysis in WAT induced by hypoxia
Compared with the control group, exposure to hypoxia equivalent to an altitude of 5500 m for 10 days significantly reduced the body weight and perirenal fat wet weight of rats (Figure 2a,b). Serum levels of both glycerol and FFA significantly increased in hypoxia‐group rats, indicating enhanced lipolysis under hypoxia exposure (Figure 2c,d). In support of these findings, histological analysis of WAT showed that continuous hypoxia significantly reduced adipocyte volume compared with that in control rats (Figure 2e,f). Hypoxia exposure led to increased serum levels of triglycerides (TG) and low‐density lipoprotein cholesterol (LDL‐C), while total cholesterol (TC) and high‐density lipoprotein cholesterol (HDL‐C) levels did not change significantly (Figure 2g–j).
**FIGURE 2:** *4‐PBA treatment attenuates enhanced lipolysis in white adipose tissue under hypoxia. 4‐PBA treatment (30 mg/kg body weight by intra‐peritoneal injection) significantly attenuates hypoxia‐induced body weight (a) and WAT loss (b); (c) serum levels of glycerol; (d) serum levels of FFA; (e) representative images of HE‐stained sections of WAT (magnification, 400×); (f) changes in adipocyte volume in WAT; (g) serum levels of TG; (h) serum levels of TC; (i) serum levels of HDL‐C; (j) serum levels of LDL‐C. Data are shown as mean ± SE, *p < .05, **p < .01 (control group vs. hypoxia group, n = 6/group); #p < .05, ##p < .01 (hypoxia group vs. hypoxia + 4‐PBA group, n = 6/group)*
To investigate the effect of ER stress inhibition on WAT lipolysis under hypoxia, we first evaluated the body weight and perirenal fat wet weight of hypoxic rats with or without 4‐PBA treatment. 4‐PBA significantly attenuated the reductions in body weight and perirenal fat wet weight after 10 days of hypoxia exposure (Figure 2a,b). In addition, inhibition of ER stress by 4‐PBA was associated with a significant reduction in lipolysis, evidenced by significantly reduced serum glycerol and FFA levels (Figure 2c,d). Moreover, 4‐PBA treatment significantly attenuated the hypoxia‐induced reduction of adipocyte volume (Figure 2f) and effectively attenuated the hypoxia‐induced increase in TG levels (Figure 2g).
## ER stress inhibition ameliorates hypoxia‐induced WAT lipolysis via the cAMP/PKA pathway
Endoplasmic reticulum stress has been suggested to trigger lipolysis in adipocytes. The lipolysis process is closely correlated with the production of cAMP and the activation of cAMP‐dependent protein kinase A (PKA). In our study, hypoxia challenge significantly increased p‐PKA production (Figure 3a,b), which phosphorylates HSL and perilipin (Miyoshi et al., 2006; Sztalryd et al., 2003). The p‐HSL/HSL (Figure 3c) and p‐perilipin/perilipin (Figure 3d) ratios significantly increased in the hypoxia group and were attenuated by 4‐PBA treatment. Although the abundance of ATGL remained unchanged in the WAT of hypoxic rats, the level of CGI‐58 significantly increased compared with control rats (Figure 3e,f). Taken together, these data indicate that the inhibition of ER stress alleviates hypoxia‐induced lipolysis mainly by blocking the activation of the cAMP‐PKA‐p‐HSL/perilipin pathway.
**FIGURE 3:** *ER stress inhibition ameliorates WAT lipolysis in hypoxic rats via the cAMP/PKA pathway. 4‐PBA treatment significantly downregulated expression levels of WAT lipolysis‐related genes induced by hypoxia. (a) p‐PKA, p‐HSL, HSL, p‐Peri, perilipin, CGI‐58, and ATGL protein expression levels; (b) relative p‐PKA protein expression levels; (c) p‐HSL/HSL ratio; (d) p‐Peri/Peri ratio; (e) relative CGI‐58 protein expression levels; (f) relative ATGL protein expression levels. Data are shown as the mean ± SE, *p < .05, **p < .01, and ***p < .001 (control group vs. hypoxia group, n = 6/group); #p < .05, ##p < .01 (hypoxia group vs. hypoxia + 4‐PBA group, n = 6/group)*
## 4‐PBA treatment ameliorated hypoxia‐induced liver lipid transport and accumulation
Under continuous hypoxia exposure, increased delivery of free fatty acids (FFA) caused by enhanced lipolysis in WAT may contribute to lipid accumulation in the liver. As shown in Figure 4a, FFA content significantly increased in the livers of hypoxia‐group rats, which was attenuated by 4‐PBA treatment. Lipid uptake in the liver is regulated by many transporters, including cluster of differentiation 36 (CD36), fatty acid binding protein 1 (FABP1), and FABP4. The mRNA levels of CD36, FABP1, and FABP4, which regulate the entry of fatty acids into hepatocytes, were upregulated to cope with the increased circulating FFAs (Figure 4b–d).
**FIGURE 4:** *4‐PBA treatment ameliorates hypoxia‐induced lipid accumulation in the liver. (a) 4‐PBA treatment significantly attenuates hypoxia‐induced FFA accumulation in the liver. Relative mRNA levels of (b) CD36, (c) FABP1, and (d) FABP4. Data are shown as the mean ± SE, *p < .05, **p < .01, and ***p < .001 (control group vs. hypoxia group, n = 6/group); #p < .05, ##p < .01 (hypoxia group vs. hypoxia + 4‐PBA group, n = 6/group)*
## 4‐PBA treatment ameliorated hypoxia‐induced hepatic injury and apoptosis
Hypoxia‐induced liver lipid accumulation may further trigger the pathogenesis of liver injury, so serum levels of liver enzymes were tested to confirm this speculation. As shown in Figure 5a–c, hypoxia‐group rats exhibited a marked increase in the levels of AST, ALT, and ALP (p < .05), indicating potential liver injury. However, the hypoxia + 4‐PBA group showed significantly decreased levels of AST and ALT (p < .05) compared with the hypoxia group, indicating that 4‐PBA inhibits hypoxia‐induced hepatocellular injury.
**FIGURE 5:** *4‐PBA ameliorates hypoxia‐induced hepatic injury and apoptosis. (a) Serum levels of AST in rats exposed to hypoxic (n = 6) or normoxic (n = 6) conditions; (b) serum levels of ALT; (c) serum levels of ALP; (d) apoptosis index of the four groups of rats; (e) representative images of TUNEL‐stained sections of liver (magnification, 400×). Data are shown as mean ± SE, *p < .05, **p < .01 (control group vs. hypoxia group); #p < .05, ##p < .01 (hypoxia group vs. hypoxia + 4‐PBA group)*
The apoptosis status of rat livers exposed to hypoxia was evaluated with a TUNEL assay. As shown in Figure 5d,e, the percentage of apoptotic cells was significantly increased in the hypoxia group compared with the control group, and this increase was effectively attenuated by 4‐PBA treatment.
## DISCUSSION
Lipid metabolism in white adipose tissue plays an essential role in maintaining energy homeostasis at high altitude. In this study, ER stress‐mediated lipolysis in WAT was enhanced in a rat model of high‐altitude hypoxia. Moreover, we found that increased FFA release results in liver lipid accumulation and liver dysfunction, both of which were attenuated by the inhibition of ER stress using 4‐PBA.
Because a variety of lipid metabolism‐related enzymes are located on the ER membrane and the ER is a major site of lipid metabolism, the ER is involved in the control of metabolic homeostasis via the regulation of lipid metabolism. Under normal conditions, the ER in the adipocyte functions to meet the demands of protein synthesis and secretion, triglyceride synthesis, nascent lipid droplet formation, and nutrient sensing. Under stressful conditions, however, ER function is overwhelmed and the UPR is activated (Menikdiwela et al., 2019; Sikkeland et al., 2019). Perturbations in ER homeostasis therefore constitute a vital pathogenic mechanism in multiple metabolic disorders of adipose tissue (Khan & Wang, 2014; Suzuki et al., 2017). Adverse stimuli such as hypoxia may challenge the adipocyte and induce ER stress. In the present study, continuous hypoxia exposure evoked ER stress in adipose tissue, evidenced by increased GRP78, CHOP, p‐PERK, and p‐IRE1α expression in rat WAT. This finding is in accordance with previous studies showing that hypoxia exposure induces ER stress in 3T3‐F442A and 3T3‐L1 adipocytes (Mihai & Schroder, 2015). UPR pathways are activated to ameliorate the overload of unfolded proteins under ER stress, which in turn influences lipid metabolism (Song et al., 2016). The activation of ER stress in adipose tissue may further induce lipolysis and elevate circulating FFAs (Song et al., 2017).
To confirm the potential role of ER stress and the UPR in the modulation of lipolysis, we treated rats with 4‐PBA, an ER stress inhibitor. 4‐PBA treatment led to a significant reduction in lipolysis by blocking the phosphorylation of HSL and perilipin. As a result of this upstream regulation, 4‐PBA treatment effectively reduced glycerol and FFA release from adipose tissue, suggesting that ER stress mediates lipolysis under hypoxia mainly by regulating cAMP‐PKA/HSL. Similar to our study, enhanced lipolysis and ER stress occurred in the visceral WAT of a rat model of chronic kidney disease, and inhibition of ER stress alleviated lipolysis (Zhu et al., 2014). In addition, curcumin was reported to suppress ER stress‐mediated lipolysis via the cAMP/PKA/HSL pathway (Wang et al., 2016). Deng et al. also reported that ER stress drives lipolysis through upregulation of GRP78 and increased phosphorylation of PERK and eIF2α in rat adipocytes (Deng et al., 2012).
Since the liver is the largest metabolic organ and regulates various physiological and metabolic processes, it also performs a key role in high‐altitude adaptation (Xu et al., 2019). Adipose dysfunction is closely associated with metabolism‐related liver diseases, and an understanding of the interplay between these tissues and the proposed mechanisms is still needed (Da Silva Rosa et al., 2020). Accumulating data point to the pathophysiological role of ectopic fat accumulation in different organs, including the liver (Bosy‐Westphal et al., 2019). In this study, the increased uptake of circulating lipids induced by WAT lipolysis significantly stimulated hepatic expression of the lipid uptake and transport proteins CD36 and FABP4, which resulted in excess fatty acid uptake and lipid over‐accumulation in the liver. As a result, hypoxia‐treated rats displayed increased liver enzymes and hepatic apoptosis. As shown in Figure 6, 4‐PBA effectively attenuated hypoxia‐induced lipolysis via the cAMP‐PKA‐HSL/perilipin pathway. The protective effect of 4‐PBA against liver injury and apoptosis likely results from decreased liver lipid accumulation via inhibition of FFA transport. Lines of evidence show that excess FFA may modify the biology and function of hepatocytes and play an essential role in the pathogenesis of liver dysfunction (Pereira et al., 2021). A high serum level of saturated FFAs is associated with hepatocyte lipoapoptosis (Takahara et al., 2017). In line with our findings, Hubel et al. found that repetitive amiodarone treatment led to ER stress and aggravated lipolysis in adipose tissue while inducing a lipotoxic hepatic lipid environment and hepatic injury (Hubel et al., 2021).
**FIGURE 6:** *Protective mechanisms by which 4‐PBA inhibits hypoxia‐induced lipolysis in WAT and lipid accumulation in the liver through regulating ER stress. Treatment with 4‐PBA, an inhibitor of ER stress, effectively attenuated hypoxia‐induced lipolysis via the cAMP‐PKA‐HSL/perilipin pathway. In addition, 4‐PBA treatment significantly attenuated liver injury and apoptosis, likely resulting from decreased liver lipid accumulation via inhibition of FFA transport.*
In conclusion, enhanced ER stress‐mediated WAT lipolysis was observed in a rat model of high‐altitude hypoxia and contributes to hepatic dysfunction and apoptosis through excess release of FFA. Our findings highlight the vital role of 4‐PBA in regulating WAT lipolysis and liver dysfunction via ER stress, which may provide novel insights into systemic metabolic disturbances in high‐altitude areas.
## CONFLICT OF INTEREST
The authors declare no conflict of interest.
## DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author.