More free COVID-19 tests from the government are available for home delivery through the mail – ABC News

Former British Prime Minister Boris Johnson ‘bamboozled’ by science, COVID-19 inquiry told – ABC News

November 21, 2023

LONDON (AP) -- Boris Johnson, the former British prime minister, struggled to come to grips with much of the science during the coronavirus pandemic, his chief scientific adviser said Monday.

In keenly awaited testimony to the country's public inquiry into the COVID-19 pandemic, Patrick Vallance said he and others faced repeated problems getting Johnson to understand the science and that he changed his mind on numerous occasions.

"I think I'm right in saying that the prime minister gave up science at 15," he said. "I think he'd be the first to admit it wasn't his forte and that he struggled with the concepts and we did need to repeat them often."

Extracts from Vallance's mostly contemporaneous diary of the time were relayed to the inquiry. In them, he wrote that Johnson was often "bamboozled" by the graphs and data and that watching him "get his head round stats is awful."

During the pandemic, Vallance was a highly visible presence in the U.K. He and the chief medical officer, Chris Whitty, regularly flanked Johnson at the daily COVID-19 press briefings given from the prime minister's offices on Downing Street.

Vallance, who stepped down from his role as the British government's chief scientific adviser earlier this year, said Johnson's struggles were not unique and that many leaders had problems in understanding the scientific evidence and advice, especially in the first stages of the pandemic in early 2020.

He recalled a meeting of European scientific advisers where one country leader was said to have "problems with exponential curves" and "the telephone call burst into laughter, because it was true in every country."

Johnson was hospitalized with the virus in April 2020, less than two weeks after he put the country into lockdown for the first time. Vallance conceded the prime minister was unable to "concentrate on things when he was really unwell" but that after his recuperation there was "no obvious change between him and what he was like beforehand."

The U.K. has one of the highest COVID-19 death tolls in Europe, with the virus recorded as a cause of death for more than 232,000 people.

Johnson, who was forced to step down as prime minister in September 2022 following revelations of lockdown rule-breaking parties at his Downing Street residence during the pandemic, is due to address the inquiry before Christmas.

The probe, led by retired Judge Heather Hallett, is expected to take three years to complete, though interim assessments are set to be published. Johnson agreed in late 2021 to hold a public inquiry after heavy pressure from bereaved families, who have hit out at the evidence emerging about his actions.

The inquiry is divided into four so-called modules, with the current phase focusing on political decision-making around major developments, such as the timing of lockdowns. The first stage, which concluded in July, looked at the country's preparedness for the pandemic.

The inquiry is set to hear from current Prime Minister Rishi Sunak, who was Johnson's Treasury chief at the time and as such had a particular focus on the economic impacts of Britain's lockdowns.

When he does appear at the inquiry, Sunak is likely to face questioning about his Eat Out to Help Out initiative, which sought to encourage nervous customers back to restaurants in August 2020 as the first set of lockdown restrictions were being eased and before subsequent lockdowns were enacted.

Vallance said scientists weren't aware of the restaurant program until it was announced and that the messaging around it ran opposite to the need to limit mixing between households.

"I think it would have been very obvious to anyone that this inevitably would cause an increase in transmission risk," Vallance said.

Soon after, positive cases started rising and the government came under huge pressure to institute a second national lockdown, something Johnson eventually announced at the end of October 2020.

The inquiry was shown a diary entry Vallance wrote before that lockdown, which referred to Dominic Cummings, Johnson's chief political adviser at the time, saying that Sunak thinks "just let people die and that's OK."

When asked about the diary entry, the prime minister's spokesman, Max Blain, said Sunak would set out his position when he gives evidence to the inquiry.

"I'm sure the public will understand the importance of listening to all the evidence of the inquiry before coming to a conclusion," Blain said.

___

Associated Press journalist Jill Lawless contributed to this report.


Seniors made up 63 percent of covid hospitalizations earlier this year – The Washington Post

November 21, 2023

People 65 and older constituted nearly 63 percent of U.S. hospitalizations for covid-19 through the first eight months of 2023, with the rate increasing with age, according to a report from the Centers for Disease Control and Prevention.

The CDC found that people in that age group also represented more than half of the admissions to intensive care units in that period and nearly 90 percent of deaths among those hospitalized because of covid.

The hospitalization number reflects an increase from preceding months (March 2020 through December 2022), when about 46 percent of those hospitalized because of covid were 65 or older.

The report found that most older people hospitalized from January through August this year had at least one underlying health condition, and most had two or more. Most common were diabetes, kidney disorders, coronary artery disease, heart failure and obesity.

The report also noted that more than 75 percent of older adults who had been hospitalized with covid this year had not gotten the bivalent vaccine, which protects against the original coronavirus as well as subsequent variants and had been recommended last year for everyone 5 and older. This year, everyone 6 months and older is being urged to keep their coronavirus vaccinations up to date because the virus that causes covid-19 changes frequently.

The risk of contracting covid has been shown to increase with age, which has made older people with covid more likely to get very sick, need a ventilator to breathe and require hospitalization, often in an ICU.

Health experts stress that vaccination reduces the odds of hospitalization, long covid (symptoms or conditions that develop or linger after the initial infection) and dying. But it also protects others by limiting spread of the disease.

This article is part of The Post's Big Number series, which takes a brief look at the statistical aspect of health issues. Additional information and relevant research are available through the hyperlinks.


New Omicron XBB.1.5 subvariant COVID vaccines to be rolled out across Australia from December – ABC News

November 21, 2023

If you were hoping for an early Christmas present, you might just be in luck: the federal government has announced updated COVID-19 vaccines will be available from December 11.

That means those who roll up their sleeves could have boosted protection against severe disease and hospitalisation by Christmas Day.

Experts say that's critical: Australia is in the midst of a fresh COVID-19 wave that could see millions infected (or re-infected) in the next few months, according to one of the country's eminent infectious disease experts.

The composition of the two new vaccines, which are already available in the US, is different to those you may have already received.

But that's what makes them more effective at this time.

Let's unpack the details.

The new vaccines are monovalent.

That means they're designed to specifically target one COVID-19 variant.

This differs from the most recent boosters which were bivalent and tailored to protect against both the original strain of COVID-19 and the Omicron variant.

But COVID-19 has mutated several times since Omicron arrived in Australia in 2021, and we no longer see the original strain first detected in Wuhan.

Now, Omicron subvariants EG.5 (nicknamed Eris) and BA.2.86 (dubbed Pirola) are circulating in Australia.

The World Health Organization (WHO) says it is "monitoring" Pirola but has classified Eris as a variant of interest given its prevalence.

The new vaccines don't specifically target these strains as they were made for another Omicron subvariant called XBB.1.5 (sometimes known as Kraken).

But that strain is very closely related to Pirola and Eris so the XBB.1.5 vaccines will offer crossover immunity, says Paul Griffin, director of infectious diseases at Mater Health Services.

"It's a much better match for what's going around at the moment."

In addition to the US, these vaccines have already been approved in Canada, Japan, Singapore and Europe.

The WHO has recommended COVID-19 vaccine formulations move away from the original strain.

Infectious disease expert Brendan Crabb told the ABC's News Daily if vaccines keep focusing on the original virus, people will be building immunity to an "irrelevant" strain of COVID-19.

Dr Griffin says there's also a risk that if we continue boosting our immunity to the original strain, we could actually hamper our immune response to new vaccine components targeting subvariants.

That's because "immunological memory" (which protects us from a pathogen long after vaccination) can interfere with the development of updated immune responses.

The Australian Technical Advisory Group on Immunisation (ATAGI) says all currently available COVID-19 vaccines are anticipated to provide benefits, but the new vaccines are "preferred".

"Available data suggests monovalent XBB vaccines provide modestly enhanced protection from severe disease compared to older vaccines," the group said in a statement.

Dr Griffin says people shouldn't take this new advice to mean previous vaccines were ineffective, though; they were simply designed to target the virus at a different stage of its evolution.

The two big US pharmaceutical companies Pfizer and Moderna have made the new monovalent vaccines.

Like their previous COVID-19 vaccines, they are both mRNA vaccines. This means they use strands of genetic code to tell the body to construct spike proteins, which then elicit an immune response.

This is what will be on offer as first doses or boosters:

There are no monovalent XBB.1.5 vaccines registered for use in children aged six months to four years old. The original Pfizer vaccine is still the only formulation available for this age group.

The new vaccines were approved by the Therapeutic Goods Administration (TGA) in October, but ATAGI also needed to assess them and provide advice to the Minister for Health, Mark Butler, about their use.

Last night Mr Butler accepted ATAGI's advice and announced the December rollout.

"These new vaccines will help protect Australians against current strains of COVID-19 and demonstrate the government's ongoing commitment to provide access to the latest and most effective vaccines," he said in a statement.

But it's worth noting the existing bivalent vaccines will still be available. This contrasts with the US, which took the bivalent Pfizer and Moderna vaccines off the market in September after approving their new XBB.1.5 cousin.

There is no change to the current vaccine eligibility recommendations by ATAGI, which you can check online.

However, ATAGI doesn't recommend getting this updated vaccine if you're up to date with your 2023 boosters (or you had your primary vaccine dose less than six months ago) as you should still be well protected from severe disease.

The government says doctors and pharmacies can now order the new vaccines which will be delivered from December 11.

Providers who receive their orders earlier don't have to wait until December 11 to start administering them though.

Dr Griffin says he welcomes the long-awaited rollout timeline but wishes it was "a little bit sooner".

He's not sure whether providers will be drip-fed small volumes of the vaccines at first or whether there's a large supply already in Australia.

In the next four to five months, Professor Crabb estimates somewhere between 3 and 5 million Australians will get COVID-19 if nothing is done to curb the spread.

He says as a result, tens of thousands of Australians could die early and there could be 50,000 to 100,000 cases of long COVID-19.

While the CEO of the Burnet Institute acknowledges COVID-19 is not an "emergency" anymore, he feels people are being too casual.

"If we don't have a deliberate effort as a world to reduce the amount of transmission, it's hard to see an end to this continual problem.

"It would be fantastic if the [new] vaccine was in people's arms before they encounter the virus."

Dr Griffin says he appreciates there will be a lot of fatigue and anger about the current wave, especially as it coincides with Christmas.

For that reason he says the government needs to create an effective education campaign so people understand how these new vaccines are different and why they're worth getting.

"I think for many people, [boosters] have just become overwhelming ... we need to have a really good strategy so we can work on some of those information gaps and get the rate of uptake higher."


Multicenter observational survey on psychosocial and behavioral … – Nature.com

November 21, 2023

Study design and study participants

From June 2021 to January 2022, we conducted SARS-CoV-2 antibody testing and administered a self-report questionnaire survey on the psychosocial and behavioral impacts of COVID-19 among PLHIV (aged ≥18 years) at 11 ART facilities in Northern Vietnam. These facilities were involved in an HIV research project entitled "Establishment of the bench-to-bedside feedback system for sustainable ART and the prevention of new HIV transmission in Vietnam," which started in 2019 under a Japanese government program, the Science and Technology Research Partnership for Sustainable Development (SATREPS). These 11 ART facilities were selected in consultation with the Vietnamese Ministry of Health from multiple perspectives, including region, facility level, HIV prevalence, and support from other overseas donors. Additionally, owing to SATREPS being implemented within the framework of the Japanese government's official development assistance, some facilities were selected with an intention to provide technical assistance to facilities with insufficient access to HIV services, such as viral load testing. The study sites include one national-level hospital (NHTD), seven provincial/city-level hospitals, and three district-level hospitals. Four of the 11 hospitals were located in Hanoi (Table 1).

The content validity of the revised questionnaire was determined by an expert panel comprising HIV/AIDS experts, including HIV clinicians and social epidemiologists. The panel reviewed each item to ensure that the questionnaire was comprehensive and that no items were inappropriate in the Vietnamese cultural context.

All outpatient PLHIV who visited the study sites during the survey period were invited to complete the survey and to have a blood sample collected for antibody testing during their regular consultations. Individuals who provided written informed consent participated in the survey on the same day as their consultation. Because data collection was carried out during the largest COVID-19 outbreak in Vietnam, those who could not visit the study sites during the study period because of movement restrictions and those who received ART via post or in other facilities could not participate in the survey.

Information on sex, age, and occupation before the COVID-19 pandemic was collected. Age was divided into four categories using interquartile values. Occupation before the pandemic fell into the following categories: salaried worker, self-employed, household worker, unemployed or student or retired, or other.

The incidence of SARS-CoV-2 infection was investigated using the questionnaire survey and laboratory-based immunoassay to detect anti-SARS-CoV-2 N IgG antibodies with an automated immunoassay system (ARCHITECT i2000, Abbott Laboratories Inc., Abbott Park, IL, USA) and a 6R86 SARS-CoV-2 IgG Reagent Kit (Abbott Laboratories Inc.). In the questionnaire, the self-reported incidence was assessed by asking whether the respondent had been diagnosed with SARS-CoV-2 infection in a polymerase chain reaction (PCR) test or quarantined as a close contact of another person with such a diagnosis. The methods used for IgG antibody testing were the same as those in a previous study conducted in 2020 at the NHTD and are described elsewhere [15].

The SARS-CoV-2 IgG assay is designed to detect IgG antibodies to the nucleocapsid (N) protein of SARS-CoV-2, whereas the main target of the SARS-CoV-2 vaccines available in Vietnam at the time of the survey was the spike (S) protein. No study participant had received an inactivated whole-cell vaccine containing the N protein. Therefore, our antibody test results were not affected by vaccine history.

The questionnaire included the total vaccine doses received so far, behaviors to prevent COVID-19 infection (i.e., wearing a mask, hand washing, gargling, social distancing, and others), changes in social contacts compared with those pre-COVID-19 (i.e., no change, reduced, or increased), and willingness to have a diagnostic test when having symptoms of COVID-19 (i.e., willing to be tested, not willing to be tested, or unsure). Participants who said they were not willing to be tested were asked the reasons. The response options included fear of HIV status being disclosed, fear of laboratory confirmation of COVID-19 infection, and fear of discrimination against people with COVID-19 infection, and participants could select all applicable options.

The questionnaire assessed participants' experience of continuing ART during the outbreak (i.e., continued to receive ART without interruption or discontinuation owing to the pandemic) and their experience of receiving ART at another clinic that was not their regular hospital during the outbreak. Question responses were dichotomous. Additionally, participants were asked whether they had received any form of social support to continue ART and HIV care during the COVID-19 pandemic. Participants used free text to describe the type of social support that could be effective in ensuring continuity of ART during the pandemic. At the NHTD, all patients had scheduled visits to measure HIV viral load during June–July each year. We obtained visit status and reasons for missing a scheduled visit, as well as data on HIV viral load in June–July 2020 and 2021.

Alcohol intake, drug use, and sexual behaviors were assessed. Regarding alcohol intake, participants were asked whether they had any episodes of binge drinking, defined as >5 drinks on one occasion [18], in the past month, and any change in alcohol consumption before and after the COVID-19 pandemic. As for drug use, participants were asked whether they had used illegal drugs in the past 3 months and whether there was a change in the amount of drug use before and after the pandemic. Finally, we queried changes in the number of sex partners and frequency of using a condom during sex, before and after the outbreak. For questions asking about these changes, possible responses were "no change," "decreased," and "increased." For the items on alcohol intake and drug use, "non-drinker" and "non-drug user" were added to the response categories.

Changes in employment status were assessed using five categories (i.e., no change, lost job, reduced working hours, increased working hours, and other). Participants' pre- and post-COVID-19 household income and their current financial status were also assessed. Financial status was rated according to categories (i.e., no problems, a little challenging, very challenging, and other). We also asked whether participants had ever received financial assistance from any public authority. Finally, participants reported any forms of emergency assistance that could be helpful to support their lives using six options (i.e., no need, cash benefit, food benefit, tax exemption, rent subsidy, and other); participants could select all applicable options.

In the first survey conducted at the NHTD during 2020, the Vietnamese version of the Depression, Anxiety and Stress Scale-21 (DASS-21-V) with a cutoff score of 34 was used to measure the general distress experienced by study participants [15]. However, one limitation of our previous study was that there was no comparable prevalence of general distress using that cutoff score in the Vietnamese PLHIV population before and after the COVID-19 pandemic.

Prior to the COVID-19 pandemic, the National Center for Global Health and Medicine began a collaboration with the NHTD, a study site in the present research, to monitor clinical outcomes of PLHIV. Under this collaboration, we conducted a study to assess depression using the Center for Epidemiological Studies-Depression (CES-D) scale in an HIV patient cohort at the NHTD in 2016 [19]. To address limitations of the first survey and compare the prevalence of depression before and after the COVID-19 pandemic, we assessed depression using the CES-D, as well as general distress using the DASS-21-V, among participants in the second survey at the NHTD. The CES-D is a widely used self-reporting scale, and there is strong evidence for both its reliability and validity in Vietnam's HIV population, with a Cronbach's alpha of 0.81 and sensitivity and specificity of 79.8% and 83.0%, respectively, with a cutoff score of 16 [20]. The CES-D comprises 20 items answered on a four-point scale ranging from 0 (rarely or none of the time) to 3 (most or almost all the time), except for four items that are positively worded and scored in reverse. Regardless of whether questions were scored in reverse, if a respondent provided the same answer to all 20 items, the CES-D score was considered invalid and was excluded from the analysis. We used the Vietnamese version of the CES-D, which was previously available [20]. We defined a CES-D score of 16 or higher as experiencing depression because this cutoff has been proven optimal for assessing depression in Vietnam's HIV population [20]. The DASS-21-V comprises 21 items answered on a four-point scale ranging from 0 (does not apply to me at all) to 3 (applies to me very much or most of the time). According to the scoring instructions [21], the total score is calculated by multiplying the sum of the responses by two. In line with the first survey, a cutoff score of 34 was used to measure general distress. This cutoff score was suggested by Tran et al. [22] to detect general distress, including depression and anxiety, with a sensitivity of 79.1% and specificity of 77.0% in Vietnamese women.
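The CES-D and DASS-21 scoring rules described above can be sketched as follows. This is an illustrative sketch, not the study's code; item responses are invented, and the 0-based positions of the four reverse-scored CES-D items are taken from the standard instrument (items 4, 8, 12, and 16 in 1-based numbering).

```python
# 0-based positions of the four positively worded, reverse-scored CES-D
# items (items 4, 8, 12, 16 in the standard 1-based numbering).
REVERSED_CESD_ITEMS = {3, 7, 11, 15}

def cesd_score(responses):
    """Score 20 CES-D items (each 0-3); return (score, is_valid).

    Per the exclusion rule described in the text, a response set is
    invalid if all 20 answers are identical, regardless of reverse
    scoring.
    """
    assert len(responses) == 20 and all(0 <= r <= 3 for r in responses)
    is_valid = len(set(responses)) > 1
    score = sum(3 - r if i in REVERSED_CESD_ITEMS else r
                for i, r in enumerate(responses))
    return score, is_valid

def dass21_score(responses):
    """Total DASS-21 score: the sum of 21 items (each 0-3) times two."""
    assert len(responses) == 21 and all(0 <= r <= 3 for r in responses)
    return 2 * sum(responses)

CESD_CUTOFF = 16   # depression if CES-D score >= 16
DASS_CUTOFF = 34   # general distress if DASS-21 score >= 34
```

For example, a respondent who answers 1 to every CES-D item scores 24 (the four reversed items contribute 2 each) but is excluded as invalid because all answers are identical.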

First, we descriptively evaluated the incidence of SARS-CoV-2 infection, prevention against COVID-19 infection, the impact of COVID-19 on ART continuity, economic security, and risky health behaviors among all study participants. Next, to assess changes in the impacts of COVID-19 during different pandemic phases, we compared differences in responses between the two surveys conducted at the NHTD in 2020 and 2021 using the McNemar test. Additionally, a study on depression among PLHIV was previously conducted at the NHTD in 2016, prior to the COVID-19 pandemic. Using those available data, the prevalence of and factors associated with depression were compared between the 2016 survey (pre-COVID-19) and the present 2021 survey (post-COVID-19). Comparisons between the 2020 and 2021 surveys and the 2016 and 2021 surveys were conducted for the same questions or using the same scales among those who participated in both surveys (i.e., the same population). Univariate logistic regression models were used to investigate factors associated with depression, and crude odds ratios (ORs) were calculated. As a supplementary analysis, we performed logistic regression models using all data, including from participants who gave the same response to all items on the CES-D.
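The paired comparison above rests on the McNemar test, which uses only the discordant pairs (respondents whose answer to the same dichotomous question changed between the two waves). A minimal stdlib-only sketch of the exact version of the test is below; the study itself used SAS, and the counts in the usage note are invented.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p-value from discordant pair counts.

    b = "yes" in wave 1 / "no" in wave 2; c = "no" in wave 1 / "yes"
    in wave 2. Under the null of no change, b ~ Binomial(b + c, 0.5),
    so the two-sided p-value is twice the smaller tail, capped at 1.
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

With invented counts of 1 and 9 discordant pairs, `mcnemar_exact(1, 9)` gives p ≈ 0.021, suggesting a change in the paired responses between the two surveys.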

All analyses were performed using SAS 9.4 software (SAS Institute Inc., Cary, NC, USA). All tests were two-sided, with the significance level set at 5%. Missing data were excluded from the analyses.

The study was approved by the Human Research Ethics Committee of the National Center for Global Health and Medicine (reference: NCGM-G-003560-02) and the Bio-medical Research Ethics Committees of the National Hospital for Tropical Diseases (reference: 12/HDDD-NDTU). We performed this study in accordance with Japan's Ethical Guidelines for Medical and Health Research Involving Human Subjects, issued by the Japanese Ministry of Health, Labour and Welfare.


Evaluation of the US COVID-19 Scenario Modeling Hub for … – Nature.com

November 21, 2023

Overview of evaluation approaches for scenario projections

When evaluating the distance between a scenario projection and an observation, there are two potential factors at play: (1) the scenario assumptions may not match reality (e.g., scenario-specified vaccine uptake may underestimate realized uptake), and (2) if there were to be alignment between the scenario specifications and reality, model predictions may be imperfect due to miscalibration. The difference between projections and observations is a complex combination of both sources of disagreement, and importantly, observing projections that are close to observations does not necessarily imply projections are well-calibrated (i.e., for scenarios very far from reality, we might expect projections to deviate from observations). To address both components, we evaluated the plausibility of COVID-19 Scenario Modeling Hub (SMH) scenarios and the performance of SMH projections (ensemble and component models). A similar approach has been proposed by Hausfather et al. [45]. Below, we describe in turn the component models contributing to SMH, the construction of the ensemble, the evaluation of scenario assumptions, and our approaches to estimating model calibration and SMH performance.

SMH advertised new rounds of scenario projections across various modeling channels, using an open call to elicit projections from independent modeling teams. Scenario specifications were designed in collaboration with public health partners and modeling teams, and final specifications were published on a public GitHub repository (https://github.com/midas-network/covid19-scenario-modeling-hub). Teams then submitted projections to this same repository. For additional discussion about the philosophy and history of SMH, as well as details about the SMH process, see Loo et al. [43].

Over the course of the first sixteen rounds of SMH, thirteen independent models submitted projections, with most submitting to multiple rounds. Prior experience in public health modeling varied substantially among participating teams, ranging from models newly built to address the COVID-19 pandemic to teams with long-established relationships with local, state, and national public health agencies. The majority of submitting models were mechanistic compartmental models, though there was one semi-mechanistic model and two agent-based models. Some models were calibrated to, and made projections at, the county level, whereas others were calibrated to, and made projections at, the state level; many, but not all, had age structure. We have provided an overview of each model in Table S1. As models changed each round to accommodate different scenarios and adapt to the evolving pandemic context, we chose not to focus here on model-specific differences (in structure, parameters, or performance). For more information on round-specific implementations, we direct readers to other publications with details [5,22].

Our analysis included state- and national-level projections of weekly incident cases, hospitalizations, and deaths from individual models and various ensembles for fourteen of the first sixteen rounds of SMH (Rounds 8 and 10 were not released publicly, and therefore are not included; see also Table S2 for a list of jurisdictions included). Each round included projections from between 4 and 9 individual models as well as ensembles. For a given round, modeling teams submitted projections for all weeks of the projection period, all targets (i.e., incident or cumulative cases, hospitalizations, and deaths), all four scenarios, and at least one location (i.e., states, territories, and national). Here, we evaluated only individual models that provided national projections in addition to state-level projections (i.e., excluding individual models that did not submit a national projection, though projections from these models are still included in the state-level ensembles that were evaluated). Submitted projections that did not comply with SMH conditions (e.g., for quantifying uncertainty or defining targets) were also excluded (0.8% of all submitted projections). A detailed description of exclusions can be found in Table S2.

Modeling teams submitted probabilistic projections for each target via 23 quantiles (e.g., teams provided projected weekly incident cases for Q1, Q2.5, Q5, Q10, Q20, ..., Q80, Q90, Q95, Q97.5, and Q99). We evaluated 3 methods for aggregating projections: untrimmed-LOP, trimmed-LOP (variations of probability averaging or linear opinion pool [18]; LOP), and median-Vincent (a variation of quantile or Vincent averaging [36,37], which is also used by other hubs [11]).

The untrimmed-LOP is calculated by taking an equally weighted average of cumulative probabilities across individual models at a single value. Because teams submitted projections for fixed quantiles, we used linear interpolation between these value-quantile pairs to ensure that all model projections were defined for the same values. We assumed that all projected cumulative probabilities jump to 0 and 1 outside of the defined value-quantile pairs (i.e., Q1-Q99). In other words, for a projection defined by cumulative distribution F(x) with quantile function F⁻¹(q), we assume that F(x) = 0 for all x < F⁻¹(0.01) and F(x) = 1 for all x > F⁻¹(0.99).

The trimmed-LOP uses exterior cumulative distribution function (CDF) trimming [54] of the two outermost values to reduce the variance of the aggregate, compared to the untrimmed-LOP (i.e., the prediction intervals are narrower). To implement this method, we follow the same procedure as for the untrimmed-LOP, but instead of using an equally weighted average, we exclude the highest and lowest quantiles at a given value and equally weight all remaining values in the average. Under this trimming method, the exclusions at different values may be from different teams.

The median-Vincent aggregate is calculated by taking the median value for each specified quantile. These methods were implemented using the CombineDistributions package [19] for the R statistical software [55].
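The three aggregation rules can be sketched as follows. This is an illustrative sketch, not the hub's implementation (that is the R CombineDistributions package); it uses a small invented quantile grid rather than SMH's 23 levels, and the 0/1 jumps outside Q1-Q99 are handled by the interpolation's fill values.

```python
import numpy as np

def lop(model_values, levels, grid, trim=False):
    """Probability averaging (linear opinion pool) on a common value grid.

    model_values: list of arrays, one per model, giving the projected
    values at the quantile probabilities `levels` (both ascending).
    Returns the aggregate CDF evaluated at each point of `grid`.
    """
    # Each model's CDF on the grid, linearly interpolated between
    # value-quantile pairs and jumping to 0/1 outside them.
    cdfs = np.array([np.interp(grid, vals, levels, left=0.0, right=1.0)
                     for vals in model_values])
    if trim:
        # Exterior CDF trimming: at each grid value, drop the single
        # lowest and highest probability before averaging. The dropped
        # probabilities may come from different models at different
        # values; needs at least 3 models to leave anything.
        cdfs = np.sort(cdfs, axis=0)[1:-1]
    return cdfs.mean(axis=0)

def median_vincent(model_values):
    """Quantile (Vincent) averaging: the median projected value at
    each quantile level across models."""
    return np.median(np.array(model_values), axis=0)
```

For instance, with two models whose values at levels (0.25, 0.5, 0.75) are (1, 2, 3) and (3, 4, 5), the untrimmed-LOP CDF at the value 3 is the average of 0.75 and 0.25, i.e. 0.5, while median-Vincent returns the pointwise medians (2, 3, 4).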

Projections in each SMH round were made for 4 distinct scenarios that detailed potential interventions, changes in behavior, or epidemiologic situations (Fig. 1). Scenario design was guided by one or more primary purposes56, which were often informed by public health partners and our hypotheses about the most important uncertainties at the time. SMH scenarios were designed approximately one month before projections were submitted, and therefore 4-13 months before the end of the projection period, depending on the round's projection horizon. Scenario assumptions, especially those about vaccine efficacy or properties of emerging viral variants, were based on the best data and estimates available at the time of scenario design (these were often highly uncertain). Here, our purpose was to evaluate SMH scenario assumptions using the best data and estimates currently available, after the projection period had passed. We assessed SMH scenarios from two perspectives:

based on their prospective purpose: we identified whether scenarios bracketed reality along each uncertainty axis (i.e., one axis of the 2×2 table defining scenarios, based on one key source of uncertainty for the round). Scenarios in most SMH rounds were designed to bracket true values of key epidemic drivers (though the true value was not known at the time of scenario design). In other words, along each uncertainty axis in an SMH round, scenarios specified two levels (e.g., an optimistic and a pessimistic assumption). Here we tested whether the realized value fell between those two assumptions (if so, we called this bracketing).

for retrospective evaluation of calibration: we identified the set of plausible scenario-weeks for each round. One of our primary goals in this analysis was to assess and compare the calibration of different approaches (e.g., individual models, SMH ensemble, comparator models). To assess this in the most direct way possible, we chose scenarios and projection weeks that were close to what actually happened (i.e., we isolated error due to calibration by minimizing deviation between scenarios and reality; see "Overview of evaluation approaches for scenario projections" for details).

An evaluable scenario axis was defined as an axis for which assumptions could be confronted with subsequently observed data on epidemic drivers; in some instances, we could not find relevant data and the axis was not considered evaluable (e.g., NPI, see below). To evaluate scenario assumptions, we used external data sources and literature (Table S3). Due to differences across these sources, we validated each type of scenario assumption (vaccination, NPI, and variant characteristics; Fig. 2) differently, as described in detail below and in Table S3. Vaccine specifications and realized coverage are shown in Figs. S2-S5, while details of our round-by-round evaluation are provided below.

Rounds 1-4 concentrated on the early roll-out of the vaccine in the US and compliance with NPIs. To evaluate our vaccine assumptions in these rounds, we used data on reported uptake from the US Centers for Disease Control and Prevention database57. For these rounds, scenarios prescribed monthly national coverage (state-specific uptake was intentionally left to the discretion of the modeling teams), so we only used national uptake to evaluate the plausibility of each vaccination scenario (Fig. S2). In these scenarios, bracketing was defined as reality falling between cumulative coverage in the optimistic and pessimistic scenarios for 50% or more of all projection weeks. The plausible scenario was the scenario with the smallest absolute difference between cumulative coverage in the final projection week (or, in cases of variant emergence, the last week of projections before emergence; details below) and the observed cumulative coverage. We also considered choosing the plausible scenario via the cumulative difference between observed and scenario-specified coverage over the entire projection period; this always led to selecting the same scenario as plausible.
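This selection rule can be sketched as follows; the scenario labels and coverage numbers are invented for illustration:

```python
def plausible_scenario(final_week_coverage, observed_coverage):
    """Pick the scenario whose prescribed cumulative coverage in the final
    projection week is closest, in absolute difference, to the observed
    cumulative coverage.
    final_week_coverage: dict mapping scenario label -> coverage fraction.
    """
    return min(final_week_coverage,
               key=lambda s: abs(final_week_coverage[s] - observed_coverage))
```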

When scenarios specified a coverage threshold, we compared assumptions with the reported fraction of people vaccinated at the end of the projection period. For instance, in Round 2 scenarios C and D, we stipulated that coverage would not exceed 50% in any priority group, but reported vaccination exceeded this threshold. In Rounds 3-4, the prescribed thresholds were not exceeded during the truncated projection period.

By Round 5 (May 2021), vaccine uptake had started to saturate. Accordingly, in Rounds 5-7, vaccine assumptions were based on high and low saturation thresholds that should not be exceeded for the duration of the projection period, rather than on monthly uptake curves. For these rounds, we evaluated which of the prescribed thresholds was closest to the reported cumulative coverage at the end of the projection period (Fig. S3). Later rounds took similar approaches to specifying uptake of childhood vaccination (Round 9) and bivalent boosters (Rounds 14-16). Rounds 9 (Fig. S4) and 14-15 (Fig. S5) specified weekly coverage, and Round 16 specified a coverage threshold; we followed similar approaches in evaluating these scenarios.

For vaccine efficacy assumptions, we consulted population-level studies conducted during the period of the most prevalent variant during that round (Table S3). Similarly, for scenarios about emerging viral variants (regarding transmissibility increases, immune escape, and severity) and waning immunity, we used values from the literature as ground truth for these scenario assumptions. We identified the most realistic scenario as that with the assumptions closest to the literature value (or the average of literature values if multiple were available; Table S3).

Rounds 1-4 included assumptions about NPIs. We could not identify a good source of information on the efficacy of, and compliance with, NPIs that would match the specificity prescribed in the scenarios (despite the availability of mobility and policy data, e.g., Hallas et al.58). Rounds 13-15 included assumptions about immune escape and severity of hypothetical variants that may have circulated in the post-Omicron era. Round 16 considered broad variant categories based on similar levels of immune escape, in response to the increasing genetic diversity of SARS-CoV-2 viruses circulating in fall 2022. There were no data available for evaluating immune escape assumptions after the initial Omicron BA.1 wave. As such, NPI scenarios in Rounds 1-4 and immune escape variant scenarios in Rounds 13-16 were not evaluable for bracketing analyses, and we therefore considered all scenarios realistic in these cases. Overall, across the 14 publicly released rounds, we identified a single most realistic scenario in 7 rounds, and two most realistic scenarios in the other 7.

Finally, in some rounds, a new viral variant emerged during the projection period that was not specified in the scenarios for that round. We considered this emergence to be an invalidation of scenario assumptions, and removed these weeks from the set of plausible scenario-weeks. Specifically, emergence was defined as the week after prevalence exceeded 50% nationally according to outbreak.info variant reports59,60,61, accessed via outbreak.info R client62. Accordingly, the Alpha variant (not anticipated in Round 1 scenarios) emerged on 3 April 2021, the Delta variant (not anticipated in Rounds 2-5) emerged on 26 June 2021, and the Omicron variant (not anticipated in Round 9) emerged on 25 December 2021.
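The emergence rule can be sketched as follows; the weekly prevalence series here is illustrative, not outbreak.info data:

```python
def emergence_week(weekly_prevalence):
    """Return the week label following the first week in which national
    variant prevalence exceeds 50%, or None if it never does.
    weekly_prevalence: chronological list of (week_label, prevalence).
    """
    for i, (week, share) in enumerate(weekly_prevalence):
        if share > 0.5:
            if i + 1 < len(weekly_prevalence):
                return weekly_prevalence[i + 1][0]
            return None
    return None
```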

To assess the added value of SMH projections against plausible alternative sources of information, we also assessed comparator models or other benchmarks. Comparator models based on historical data were not available here (e.g., there was no prior observation of COVID-19 in February in the US when we projected February 2021). There are many potential alternatives; here we used three comparator models: naive, 4-week forecast, and trend-continuation.

The baseline naive model was generated by carrying recent observations forward, with variance based on historical patterns (Figs. S13-S15). We used the 4-week ahead baseline model forecast from the COVID-19 Forecast Hub11 for the first week of the projection period as the naive model, and assumed this projection held for the duration of the projection period (i.e., this forecast was the naive projection for all weeks during the projection period). Because the COVID-19 Forecast Hub collects daily forecasts for hospitalizations, we drew 1000 random samples from each daily distribution in a given week and summed those samples to obtain a prediction for weekly hospitalizations. The naive model is flat and has relatively large prediction intervals in some instances.

As a forecast-based comparator, we used the COVID-19 Forecast Hub COVIDhub-4_week_ensemble model (Figs. S7-S9). This model includes forecasts (made every week) from multiple component models (e.g., on average 41 component models between January and October 202111). We obtained weekly hospitalization forecasts from the daily forecasts of the COVID-19 Forecast Hub using the same method as for the naive model. This 4-week forecast model is particularly skilled at death forecasts11; however, in practice, there is a mismatch in timing between when these forecasts were made and when SMH projections were made. For most SMH projection weeks, forecasts from this model would not yet be available (i.e., projection horizons more than 4 weeks into the future); yet, for the first 4 weeks of the SMH projection period, SMH projections may have had access to more recent data. It should also be noted that the team running the COVID-19 Forecast Hub has flagged the 4-week ahead predictions of cases and hospitalizations as unreliable63. Further, SMH may be given an advantage by the post-hoc selection of plausible scenario-weeks based on the validity of scenario assumptions.

Finally, the trend-continuation model was based on a statistical generalized additive model (Figs. S10-S12). The model was fit to the square root of the 14-day moving average, with cubic spline terms for time, separately for each location. We considered including seasonal terms, but there were not enough historical data to meaningfully estimate seasonality. For each round, we used only one year of data to fit the model, and projected forward for the duration of the projection period. The SMH ensemble consistently outperformed this comparator model (see Figs. S16-S21).

Prediction performance is typically based on a measure of distance between projections and ground-truth observations. We used the Johns Hopkins CSSE dataset64 as the source of ground truth on reported COVID-19 cases and deaths, and the U.S. Health and Human Services Protect Public Data Hub65 as the source of reported COVID-19 hospitalizations. These sources were also used for calibration of the component models. CSSE data were only produced through 4 March 2023, so our evaluation of Rounds 13-16 ended at this date (1 week before the end of the 52-week projection period in Round 13, 11 weeks before the end of the 52-week projection period in Round 14, 9 weeks before the end of the 40-week projection period in Round 15, and 8 weeks before the end of the 26-week projection period in Round 16).

We used two metrics to measure the performance of probabilistic projections, both common in the evaluation of infectious disease predictions. To define these metrics, let $F$ be the projection of interest (approximated by a set of 23 quantile-value pairs) and $o$ be the corresponding observed value. The $\alpha\%$ prediction interval is the interval within which we expect the observed value to fall with $\alpha\%$ probability, given that reality perfectly aligns with the specified scenario.

Ninety-five percent (95%) coverage measures the percent of projections for which the observation falls within the 95% prediction interval. In other words, 95% coverage is calculated as

$$C_{95\%}(F,o)=\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\left(F^{-1}(0.025)\le o\le F^{-1}(0.975)\right)$$

(1)

where $\mathbf{1}(\cdot)$ is the indicator function, i.e., $\mathbf{1}\left(F^{-1}(0.025)\le o\le F^{-1}(0.975)\right)=1$ if the observation falls between the values corresponding to Q2.5 and Q97.5, and is 0 otherwise. We calculated coverage over multiple locations for a given week (i.e., $i=1,\ldots,N$ for $N$ locations), or across all weeks and locations.
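Equation (1) amounts to the following check; the arrays below are illustrative:

```python
import numpy as np

def coverage_95(q025, q975, obs):
    """Share of observations falling inside the central 95% prediction
    interval, i.e., between the projected Q2.5 and Q97.5 values."""
    q025, q975, obs = map(np.asarray, (q025, q975, obs))
    return float(np.mean((q025 <= obs) & (obs <= q975)))
```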

Weighted interval score (WIS) measures the extent to which a projection captures an observation, and penalizes wider prediction intervals35. First, given a prediction interval (with uncertainty level $\alpha$) defined by upper and lower bounds, $u=F^{-1}\!\left(1-\frac{\alpha}{2}\right)$ and $l=F^{-1}\!\left(\frac{\alpha}{2}\right)$, the interval score is calculated as

$$\mathrm{IS}_{\alpha}(F,o)=(u-l)+\frac{2}{\alpha}(l-o)\,\mathbf{1}(o<l)+\frac{2}{\alpha}(o-u)\,\mathbf{1}(u<o)$$

(2)

where again $\mathbf{1}(\cdot)$ is the indicator function. The first term of $\mathrm{IS}_{\alpha}$ represents the width of the prediction interval, and the last two terms are penalties for over- and under-prediction, respectively. Then, using weights that approximate the continuous ranked probability score66, the weighted interval score is calculated as

$$\mathrm{WIS}(F,o)=\frac{1}{K+1/2}\left(\frac{1}{2}\left|o-F^{-1}(0.5)\right|+\sum_{k=1}^{K}\frac{\alpha_{k}}{2}\,\mathrm{IS}_{\alpha_{k}}(F,o)\right)$$

(3)

Each projection is defined by 23 quantiles comprising 11 central prediction intervals (plus the median), which we used for the calculation of WIS (i.e., we calculated $\mathrm{IS}_{\alpha}$ for $\alpha=0.02, 0.05, 0.1, 0.2, \ldots, 0.8, 0.9$ and $K=11$). It is worth noting that these metrics do not account for measurement error in the observations.
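Equations (2) and (3) can be computed directly from the 23 quantile-value pairs; a sketch, where the quantile dictionary in the usage example is a made-up symmetric projection:

```python
def interval_score(alpha, lower, upper, o):
    """IS_alpha: interval width plus 2/alpha penalties for observations
    falling below the lower or above the upper bound."""
    return ((upper - lower)
            + (2 / alpha) * (lower - o) * (o < lower)
            + (2 / alpha) * (o - upper) * (o > upper))

def wis(quantiles, o):
    """WIS from 23 quantiles: weighted median term plus K = 11 interval
    scores for alpha = 0.02, 0.05, 0.1, 0.2, ..., 0.9.
    quantiles: dict mapping quantile level (rounded to 3 decimals) -> value."""
    alphas = [0.02, 0.05] + [round(0.1 * k, 1) for k in range(1, 10)]
    total = 0.5 * abs(o - quantiles[0.5])
    for a in alphas:
        lower = quantiles[round(a / 2, 3)]
        upper = quantiles[round(1 - a / 2, 3)]
        total += (a / 2) * interval_score(a, lower, upper, o)
    return total / (len(alphas) + 0.5)
```

An observation near the projected median yields a much smaller WIS than one far outside the outermost interval, with the penalty growing as the missed interval becomes more central.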

WIS values are on the scale of the observations, and therefore comparison of WIS across different locations or phases of the pandemic is not straightforward (e.g., the scale of case counts is very different between New York and Vermont). For this reason, we generated multiple variations of WIS to account for variation in the magnitude of observations. First, for average normalized WIS (Fig. 3b), we calculated the standard deviation of WIS, $\sigma_{s,w,t,r}$, across all scenarios and models for a given week, location, target, and round, and divided the WIS by this standard deviation (i.e., $\mathrm{WIS}/\sigma_{s,w,t,r}$). Doing so accounts for the scale of that week, target, and round, a procedure implemented in analyses of climate projections67. Then, we averaged normalized WIS values across strata of interest (e.g., across all locations, or all locations and weeks). Other standardization approaches that compute WIS on a log scale have been proposed68, though they may not be as well suited for our analysis, which focuses on planning and decision making.
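A sketch of the normalization, where the input WIS values are assumed to all belong to one week-location-target-round stratum:

```python
import numpy as np

def normalize_wis(wis_values):
    """Divide WIS values by their standard deviation across the scenarios
    and models of a single week-location-target-round stratum."""
    w = np.asarray(wis_values, dtype=float)
    return w / w.std()
```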

An alternative rescaling introduced by Cramer et al.11, relative WIS, compares the performance of a set of projections to an average projection. This metric is designed to compare performance across predictions from varying pandemic phases. The relative WIS for model $i$ is based on pairwise comparisons (to all other models, $j$) of average WIS. We calculated the average WIS across all projections in common between model $i$ and model $j$, where $\mathrm{WIS}(i)$ and $\mathrm{WIS}(j)$ are the average WIS of these projections (either in one round, or across all rounds for the overall comparison) for model $i$ and model $j$, respectively. Then, relative WIS is the geometric average of the ratios, or

$$\mathrm{relative\ WIS}=\left(\prod_{j=1}^{N}\frac{\mathrm{WIS}(i)}{\mathrm{WIS}(j)}\right)^{1/N}$$

(4)

When comparing only two models that have made projections for all the same targets, weeks, locations, rounds, etc., the relative WIS is equivalent to a simpler metric, the ratio of average WIS for each model (i.e., $\mathrm{WIS}(i)/\mathrm{WIS}(j)$). We used this metric to compare each scenario from the SMH ensemble to the 4-week forecast model (Fig. 4). For this scenario comparison, we provided bootstrap intervals by recalculating the ratio with an entire week of projections excluded (all locations and scenarios). We repeated this for all weeks, and randomly drew from these leave-one-week-out ratios 1000 times. From these draws we calculated the 5th and 95th quantiles to derive the 90% bootstrap interval, and we assumed performance was significantly better for one scenario over the others if the 90% bootstrap intervals did not overlap. We also used this metric to compare the ensemble projections to each of the comparator models (Fig. S22).
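A sketch of equation (4), taking as given the pre-computed average WIS of model i and of each comparator model j over their shared projections:

```python
import numpy as np

def relative_wis(avg_wis_i, avg_wis_js):
    """Geometric mean of the pairwise ratios of average WIS between model i
    and every comparator model j."""
    ratios = [avg_wis_i / w for w in avg_wis_js]
    return float(np.prod(ratios) ** (1.0 / len(ratios)))
```

With exactly one comparator, this reduces to the simple ratio of average WIS described above.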

In addition to traditional forecast evaluation metrics, we assessed the extent to which SMH projections predicted the qualitative shape of incident trajectories (i.e., whether trends would increase or decrease). We modified a method from McDonald et al.40 to classify observations and projections as increasing, flat, or decreasing. First, we calculated the percent change in observed incident trajectories on a two-week lag (i.e., $\log(o_{T}+1)-\log(o_{T-2}+1)$ for each state and target). We took the distribution of percent change values across all locations for a given target and set the thresholds for a decrease or increase assuming that 33% of observations would be flat (Fig. S23). Based on this approach, decreases were defined as those weeks with a percent change value below -23% for incident cases, -17% for incident hospitalizations, and -27% for incident deaths, respectively; increases were those with a percent change value above 14%, 11%, and 17%, respectively. See Fig. S34 for classification results with a one-week lag and different assumptions about the percent of observations that are flat.
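The observation-classification step can be sketched as follows; the thresholds are passed in as parameters, since in the analysis they are estimated per target from the empirical distribution:

```python
import math

def percent_change(series, lag=2):
    """Log-scale change on a `lag`-week lag: log(o_T + 1) - log(o_{T-lag} + 1)."""
    return [math.log(series[t] + 1) - math.log(series[t - lag] + 1)
            for t in range(lag, len(series))]

def classify(change, down, up):
    """Label a change as decreasing, flat, or increasing using the
    target-specific thresholds `down` (negative) and `up` (positive)."""
    if change < down:
        return "decreasing"
    if change > up:
        return "increasing"
    return "flat"
```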

Then, to classify trends in projections, we again calculated the percent change on a two-week lag of the projected median (we also considered the 75th and 95th quantiles, because our aggregation method is known to generate a flat median when asynchrony between component models is high). For the first two projection weeks of each round, we calculated the percent change relative to the observations one and two weeks before the projection start date (as there were no projections to use as references for those weeks). We applied the same thresholds from the observations to classify a projection, and compared this classification to the observed classification. This method accounts for instances in which SMH projections anticipated a change in trajectory but not the magnitude of that change (see Fig. S44), but it does not account for instances in which SMH projections anticipated a change but missed its timing (this occurred to some extent in Rounds 6 and 7, during the Delta variant wave). See Figs. S24-S33 for classifications of all observations and projections.

We assessed how well SMH projections captured incident trends using precision and recall, two common metrics for evaluating classification tasks, here with three classes: increasing, flat, and decreasing41. To calculate these metrics, we grouped all projections by the projected and the observed trend (as in Fig. 5d). Let $N_{po}$ be the number of projections classified by SMH as trend $p$ (rows of Fig. 5d) where the corresponding observation was trend $o$ (columns of Fig. 5d). All possible combinations are provided in Table 2. Then, for class $c$ (either decreasing, flat, or increasing),

precision is the fraction of projections correctly classified as $c$, out of the total number of projections classified as $c$, or

$$\mathrm{precision}_{c}=\frac{N_{cc}}{\sum_{j=1}^{3}N_{cj}}$$

(5)

For example, the precision of increasing trends is the number of correctly classified increases ($N_{II}$) divided by the total number of projections classified as increasing ($N_{ID}+N_{IF}+N_{II}$).

recall is the fraction of projections correctly classified as $c$, out of the total number of projections observed as $c$, or

$$\mathrm{recall}_{c}=\frac{N_{cc}}{\sum_{j=1}^{3}N_{jc}}$$

(6)

For example, the recall of increasing trends is the number of correctly classified increases ($N_{II}$) divided by the total number of observations that increased ($N_{DI}+N_{FI}+N_{II}$).
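Both metrics follow directly from the confusion counts N_po; the counts below are invented for illustration, with classes abbreviated D, F, I:

```python
def precision_recall(confusion, classes=("D", "F", "I")):
    """Per-class precision and recall from confusion counts, where
    confusion[(p, o)] is the number of projections classified as trend p
    whose observation was trend o. Classes with no counts return None."""
    metrics = {}
    for c in classes:
        classified_as_c = sum(confusion.get((c, o), 0) for o in classes)
        observed_as_c = sum(confusion.get((p, c), 0) for p in classes)
        correct = confusion.get((c, c), 0)
        metrics[c] = (correct / classified_as_c if classified_as_c else None,
                      correct / observed_as_c if observed_as_c else None)
    return metrics
```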

In some instances, we report precision and recall summarized across all three classes; to do so, we average precision or recall across the three projected classes (decreasing, flat, increasing). The code and data to reproduce all analyses can be found in the public GitHub repository69.

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.


See the rest here: Evaluation of the US COVID-19 Scenario Modeling Hub for ... - Nature.com
More US parents plan to vaccinate kids against RSV, flu than COVID … – University of Minnesota Twin Cities


November 21, 2023

A Texas A&M University survey of US parents finds that 41% already had or would vaccinate their children against COVID-19, 63% against influenza, and 71% against respiratory syncytial virus (RSV) this fall and winter.

The study, published late last week in Vaccine, involved 5,035 parents of children younger than 18 years surveyed on September 27 and 28, 2023.

In total, 40.9% of respondents said they had or would vaccinate their children against COVID-19, while 63.3% said they would do so against flu, and 71.1% said their children would receive the RSV vaccine.

Predictors of intent to vaccinate included concerns about the diseases (average marginal effects [AME] for COVID-19, 0.064; AME for flu, 0.060; and AME for RSV, 0.048), as well as trust in health institutions (AME for COVID-19, 0.023; AME for flu, 0.010; AME for RSV, 0.028). Parents who had previously vaccinated their children were also more likely to pursue vaccination (AME for COVID-19, 0.176; AME for flu, 0.438; and AME for RSV, 0.194).

Relative to men, women were less likely to say they would vaccinate their children against COVID-19 and flu (AME for COVID-19, -0.076; AME for flu, -0.047). Respondents who indicated that vaccines were important were more likely to pursue vaccination for COVID-19 and RSV (AME, 0.097 and 0.072, respectively).

Worries about a link between vaccination and autism (which studies have disproven) were statistically significant only for COVID-19 (AME, -0.030). Relative to political conservatives, liberals were more likely to vaccinate against COVID-19 (AME, 0.076).


Compared with Democrats, Republicans were less inclined to vaccinate their children against COVID-19 (AME, -0.060), and Democrats had higher odds of seeking RSV vaccination (AME, 0.151). The most common reasons for vaccine hesitancy were doubts about safety, doubts about the need for vaccination, and a lack of information.

"The large number of unvaccinated children will likely lead to large numbers of excessive disease in children," the authors wrote.


The rest is here: More US parents plan to vaccinate kids against RSV, flu than COVID ... - University of Minnesota Twin Cities
COVID-19 rental assistance money still available in Butler County – WCPO 9 Cincinnati


November 21, 2023

BUTLER COUNTY, Ohio -- Roughly $1.5 million in rental assistance money is still available in Butler County, according to SELF, the organization administering the Emergency Rental Assistance Program for the county.

"We're hearing that we're still very much a godsend to a lot of households that have needed that assistance," said SELF executive director Jeffrey Diver.

He said that while a lot of people see the COVID-19 pandemic as a thing of the past, its effects are still being felt by many, and this program is one way to help.

Not everyone is eligible for the program, though.

"Examples would be somebody who had their employment cut in terms of the number of hours they worked because of COVID and the effects of COVID," Diver said. "Or it could be the increased costs that COVID brought all families." He said the money can be used on rent, utilities and even cable.

"If individuals need that assistance, they need to get it in right away to us," Diver said.

He said SELF has had to put a pause on accepting applications several times already to give themselves time to catch up on processing the applications. And with more than 100 applications already submitted this time around, Diver said another pause could be coming sooner rather than later.

Anyone who needs help paying rent or utility bills and thinks they may qualify for the program can find the form to apply here.



See the rest here: COVID-19 rental assistance money still available in Butler County - WCPO 9 Cincinnati
Why the COVID Pandemic Hit Non-White Americans the Hardest – NC State News


November 21, 2023

Mortality rose across all demographics during the first few years of the pandemic, but COVID-19 hit non-white Americans the hardest. According to the U.S. Census Bureau and the National Center for Health Statistics, the largest increase in mortality in 2020 was among the American Indian and Alaska Native populations, which saw an increase of 36.7%. The increase in mortality was 29.7% among Black Americans and 29.4% among Asian Americans. For comparison, the increase in mortality among white Americans was less than 20%.

A new book offers insights into why non-white Americans were disproportionately affected.

To learn more about the book, and how it sheds light on the role that racial inequality played and continues to play in shaping health outcomes in the United States, we talked with Melvin Thomas. Thomas is an author and co-editor of the book, Race, Ethnicity and the COVID-19 Pandemic, as well as a professor of sociology at NC State.

The Abstract: The term racial inequality covers a lot of ground. Which aspects of racial inequality in the United States are most relevant in the context of COVID-19? For example, are we talking about differences in health outcomes? Or are we talking about how inequality in other aspects of society contributed to those different outcomes?

Melvin Thomas: To understand the racial disparities in COVID-19 infections and deaths, we must understand the extent to which they are linked to racial inequalities more broadly. Persisting racial inequality in terms of income, occupational attainment, employment, and most other measures of socioeconomic well-being reveal the continuing impact of ongoing discrimination on Black, Latino/a, Asian, and Indigenous communities. Thus, racial and ethnic groups in the United States vary in vulnerability to COVID-19.

TA: The entire book is an examination of the role racial inequality played in determining how COVID-19 affected different groups, particularly during the first year of the pandemic. That being said, could you offer a concise overview of the relationship between inequality and the pandemic?

Thomas: Race and ethnicity are risk markers for other underlying conditions that affect health, including socioeconomic status, access to health care, and exposure to the virus related to occupation (e.g., frontline, essential and critical infrastructure jobs). In the United States, the parallel between racial inequality and COVID-19 infections, hospitalizations and deaths is striking. The social determinants of racial disparities in socioeconomic status and of the racial disparities in COVID-19 infections and deaths are the same: the impact of contemporary and historical racism.

TA: What was the impetus for this book?

Thomas: COVID-19 is the most significant virus to touch people of all ethnic backgrounds in the United States since the 1918 influenza pandemic. Ongoing medical and epidemiological research on the nature of the COVID-19 virus is vitally important. However, it is equally important to understand the disparate impact of this pandemic on different social groups and communities.

TA: Now that the book is out, what are you hoping it will accomplish?

Thomas: We want to provide both hope and strategies for those who are interested in real social change that alleviates suffering and puts us on a path forward to ending systemic racism, which is the root cause of the racial disparities in the impact of COVID-19. In fact, because racial disparities in COVID-19 infections and deaths clearly map along with other racial disparities in such things as income, wealth, poverty, etc., we can expect future pandemics and traumas to follow the same pattern. Social crises always hit disadvantaged racial and ethnic groups much harder. We must remove all institutional policies and practices that reinforce the racial hierarchy.


Here is the original post: Why the COVID Pandemic Hit Non-White Americans the Hardest - NC State News
Newsroom – Newsroom – City of Burbank


November 21, 2023

Los Angeles County Consumer & Business Affairs Media Release

The Los Angeles County Department of Consumer and Business Affairs (DCBA) is pleased to announce the new LA County Rent Relief Program. Put forward by the LA County Board of Supervisors, over $46 million in direct financial assistance will be provided to qualified landlords, helping to mitigate the detrimental economic impacts caused by the COVID-19 pandemic that have resulted in tenants falling behind on rent.

The LA County Rent Relief Program, administered by The Center by Lendistry, is set to provide grants up to $30,000 per unit to eligible landlords for expenses dating from April 1, 2022, to the present. With a focus on aiding small, mom-and-pop landlords who own up to four rental units, the Program aims to reduce tenant evictions due to rent arrears, maintain the viability of small-scale rental businesses, and ensure continuity of housing in the community.

Starting mid-December, landlords can apply for the LA County Rent Relief Program by visiting the portal at www.lacountyrentrelief.com. Applicants will receive free multilingual technical support from community partners to guide them through the application process and assist with gathering necessary documentation. Landlords are invited to visit the portal now and register to receive updates and a link to apply when the online application is opened. The website also provides links to additional resources designed to support both landlords and tenants.

"The LA County Rent Relief Program will provide small rental property owners with much-needed direct financial assistance to alleviate the financial burdens they have experienced due to the COVID-19 pandemic," said DCBA Director Rafael Carbajal. "By helping mom-and-pop landlords in LA County, we simultaneously address the dual impact of maintaining the availability of affordable housing in Los Angeles County and reducing the number of tenant evictions due to past due rent."

"We are proud to partner with the County in delivering crucial financial assistance. Mom-and-pop landlords, who are often from underserved communities, are the backbone of our local housing market. By helping them, we are ensuring that our most vulnerable community members continue to have a roof over their heads," said Tunua Thrash-Ntuk, President & CEO of The Center by Lendistry.

Funds will be allocated to qualified applicants across diverse cities and unincorporated areas of Los Angeles County, excluding the City of Los Angeles. Priority will be given to those showing the greatest need, guided by criteria including properties located in high-need areas as identified by the LA County Equity Explorer Tool, and income levels at or below 80% of the LA County Area Median Income (AMI). Direct payments will be made to landlords.

###

Click here to view the full media release on the Los Angeles County Department of Consumer and Business Affairs website.


Read the original post: Newsroom - City of Burbank
What is influenza B? | Louisville, Ky – Norton Healthcare

November 21, 2023

There are two types of flu viruses that make people sick each year: influenza A and influenza B. Influenza A viruses are also present in animals, while influenza B viruses are present only in humans.

Influenza A viruses are more common than influenza B among adults, and it is influenza A that causes seasonal flu epidemics most years in the United States.

Influenza B viruses also can cause seasonal epidemics, but only influenza A viruses can cause a pandemic, which is a global spread of disease. That's because influenza A viruses can mutate, or change, more rapidly than influenza B viruses.

Both influenza A and influenza B are highly contagious, and their symptoms are similar: fever, headache, cough, sore throat, muscle aches, shortness of breath, vomiting and diarrhea. Untreated, flu symptoms can last for weeks.

The flu also can cause severe illness and make some chronic medical conditions, such as asthma, heart disease and diabetes, worse. In some cases, the flu can lead to death.

Influenza A is generally considered worse than influenza B among adults, though symptoms vary from person to person. Most adults have built up immunity against influenza B. Because the two types are different, it's possible to be infected with both flu A and flu B at the same time.

Flu season runs through March, and while it's best to get your shot early, the vaccine still can provide protection later in the season. Influenza vaccine is available across Louisville and Southern Indiana. If you think you have the flu, consult your health care provider, especially if you are immunocompromised.

Influenza B symptoms can be severe in children. Children under 5 are at higher risk of serious flu complications. Children under 2 are at the highest risk for serious complications.

Flu viruses spread when a person who is infected sneezes or coughs and droplets travel to another person's nose, mouth or eyes.

Thorough and frequent hand-washing is one important way to protect yourself against infection.

The best protection is a flu shot. The Centers for Disease Control and Prevention (CDC) recommends everyone 6 months and older receive a flu vaccine. All 2023-24 flu vaccines are quadrivalent, meaning they provide protection against four flu strains: two influenza A strains and two influenza B strains.

"The seasonal flu vaccine can help prevent you from getting sick and also can be effective at keeping you from developing severe symptoms," said Mary Rademaker, M.D., medical director for Norton Immediate Care Centers. "This is especially important for those who have cancer, survived cancer or have another condition that has weakened your immune system."

The seasonal flu vaccine allows your body to build up immunity against the flu without getting sick. The flu shot contains inactivated (dead) influenza virus, which primes your immune system to fight off the real thing if you get an influenza virus infection. Dead flu virus won't give you the flu; however, some people feel sluggish or experience other side effects as their immune system activates after the vaccine.

It's important to get a flu shot every year because strains of the flu mutate over time. Getting a shot each year gives your body immunity to the latest strains of the influenza virus.

Each year, the CDC selects which strains of influenza A and influenza B to include in the vaccine based on which flu viruses are making people sick ahead of the upcoming flu season, how quickly those viruses are spreading, whether the previous year's vaccine will protect against them and whether the vaccine could protect against multiple strains of the virus.

If you get the flu, your health care provider will help you decide how best to care for yourself based on test results determining which type of flu you have, your medical history and your symptoms. Some antiviral medications work better against influenza A, while others work better against influenza B.

Antiviral treatment is recommended as soon as possible, ideally within 48 hours, for anyone hospitalized with suspected or confirmed influenza; anyone who has severe, complicated or progressive flu; or anyone at higher risk for complications from the flu.


Read more:
What is influenza B? | Louisville, Ky - Norton Healthcare