Empowering Discovery,
Enhancing Knowledge
Latest News
Student-Teacher Prompting for Red Teaming to Improve Guardrails
This paper introduces a framework for evaluating the guardrails of large language models, focusing on Vicuna-13B. We assess its ability to learn to avoid generating harmful responses under 10 red-teaming methods. We provide a dataset of teaching prompts designed to steer the LLM away from producing harmful responses, along with two additional datasets of red-teaming prompts. Our findings underscore the effectiveness of diverse teaching techniques in mitigating specific red-teaming impacts.
The First Workshop on Personalized Generative AI @ CIKM 2023: Personalization Meets Large Language Models
The First Workshop on Personalized Generative AI aims to be a cornerstone event fostering innovation and collaboration in the dynamic field of personalized AI. Leveraging the potent capabilities of Large Language Models (LLMs) to enhance user experiences with tailored responses and recommendations, the workshop is designed to address a range of pressing challenges, including bridging knowledge gaps, mitigating hallucination, and optimizing efficiency in handling extensive user profiles. As a nexus for academics and industry professionals, the event promises rich discussions on topics such as the development and fine-tuning of foundational models, strategies for multi-modal personalization, and the imperative ethical and privacy considerations in LLM deployment. Through a curated series of keynote speeches, insightful panel discussions, and hands-on sessions, the workshop aspires to …
FedNaWi: Selecting the Befitting Clients for Robust Federated Learning in IoT Applications
Federated Learning (FL) is an important privacy-preserving learning paradigm that is expected to play an essential role in the future Intelligent Internet of Things (IoT). However, model training in FL is vulnerable to noise and the statistical heterogeneity of local data across IoT clients. In this paper, we propose FedNaWi, a “Go Narrow, Then Wide” client selection method that speeds up the FL training, achieves higher model performance, while requiring no additional data or sensitive information transfer from clients. Our method first selects reliable clients (i.e., going narrow) which allows the global model to quickly improve its performance and then includes less reliable clients (i.e., going wide) to exploit more IoT data of clients to further improve the global model. To profile client utility, we introduce a unified Bayesian framework to model the client utility at the FL server, assisted by a small amount of auxiliary data. We …
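The "Go Narrow, Then Wide" selection strategy described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the function name, phase lengths, and cohort sizes are illustrative, and the per-client reliability scores are assumed to be given (in FedNaWi they are estimated at the server with a Bayesian framework and a small amount of auxiliary data).

```python
import random

def select_clients(reliability, round_idx, narrow_rounds=5, narrow_k=3, wide_k=6):
    """Pick client IDs for one FL training round.

    Early ("narrow") rounds use only the top-k most reliable clients so the
    global model improves quickly; later ("wide") rounds keep those clients
    and add a random sample of the rest to exploit more client data.
    `reliability` maps client id -> estimated utility score.
    """
    ranked = sorted(reliability, key=reliability.get, reverse=True)
    if round_idx < narrow_rounds:
        # Narrow phase: most reliable clients only.
        return ranked[:narrow_k]
    # Wide phase: reliable core plus a sample of less reliable clients.
    extra = random.sample(ranked[narrow_k:], wide_k - narrow_k)
    return ranked[:narrow_k] + extra
```

For example, with ten clients scored 0.0–0.9, round 0 would return only the three highest-scoring clients, while round 5 would return six clients that still include that reliable core.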
Statistical Analysis Plan for the INTEnsive ambulance-delivered blood pressure Reduction in hyper-ACute stroke Trial (INTERACT4).
Introduction
Recruitment is complete in the fourth INTEnsive ambulance-delivered blood pressure Reduction in hyper-ACute stroke Trial (INTERACT4), a multicenter, prospective, randomized, open-label, blinded endpoint-assessed trial of pre-hospital blood pressure (BP) lowering initiated in the ambulance for patients with a suspected acute stroke and elevated BP in China. In accordance with the registered and published trial protocol, this manuscript, developed by the blinded trial Steering Committee and Operations team, outlines a detailed statistical analysis plan for the trial prior to database lock.
Methods
Patients were randomized (1:1) to an intensive group (target systolic BP [SBP] 130-140 mmHg within 30 minutes) or a guideline-recommended BP management group (BP lowering considered only if SBP >220 mmHg). The primary outcome is an ordinal analysis of the full range of scores on the modified Rankin scale at 90 …

Academic Papers and Presentations by Dr. Jenny Yang

Explore Dr. Jenny Yang’s related academic papers, conference presentations, and more.