Gallup reports a record-high level of subjective feelings of safety across nearly two decades of measurement. In 2024, 73% of adults worldwide answered “yes” to the question: “Do you feel safe walking alone at night in the area where you live?” This is the highest figure since 2006—despite historically high levels of armed conflict.
The study is based on surveys of 145,170 respondents aged 15+ across 144 countries and territories (fieldwork conducted in 2024). Probability-based nationally representative samples were used, with both telephone and face-to-face interviews. The core indicator is the response to the nighttime safety question; it is combined with trust in police and experiences of theft or assault to calculate the Law and Order Index.
The highest levels of perceived safety are observed in the Asia-Pacific region, Western Europe, the Gulf countries, and post-Soviet Eurasia (with some countries reporting over 90% “yes” responses). In Eurasia, the indicator has nearly doubled compared to 2006.
Africa and Latin America remain regions with the lowest subjective sense of safety. South Africa records the lowest score globally at 33%; neighboring Lesotho and Botswana are also near the bottom of the ranking. Against this backdrop, urban programs in countries such as Brazil demonstrate that local initiatives can meaningfully improve perceptions of safety.
The gender gap remains particularly striking: in 2024, 67% of women and 78% of men reported feeling safe at night. It persists even in high-income countries (for example, a 26-percentage-point gap in the United States; in Italy, women report one of the lowest "feeling safe" rates in the EU). The rise in subjective safety is driven by local factors: trust in neighbors and institutions, the quality of policing, stable behavioral norms, and urban safety infrastructure. In other words, personal safety is not only a consequence of peace but also a condition for achieving it. Strong local ties and effective institutions can sustain a sense of security even amid global instability.
Researchers from Harvard Medical School, Imperial College London, and Genentech have introduced PDGrapher—an AI system capable of designing recovery scenarios for disease-affected cells.
Traditional drug development relies on models that predict how cells respond to specific therapeutic interventions. PDGrapher works in the opposite direction: it infers which combinations of genes need to be perturbed to return a cell to a healthy state. Rather than describing responses to treatment, the model reconstructs the causal relationships underlying cellular recovery.
To enable this, the researchers designed PDGrapher as a graph neural network with two modules, where genes serve as nodes and their interactions as edges in a causal map. One module selects which genes should be "turned on" or "turned off," while the other evaluates whether the proposed combination will produce the desired outcome.
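The two-module idea can be sketched in miniature. The sketch below is a hypothetical illustration, not the authors' code: the toy graph, the invented gene names "G1" and "G2", and the simple averaging "response model" are all assumptions made for the example, whereas the real PDGrapher uses trained graph neural networks over large gene-interaction networks.

```python
# Toy sketch of PDGrapher's two-module idea (illustrative, not the real model).
# Module 1 selects candidate gene perturbations on a causal graph;
# Module 2 predicts whether a perturbation set moves the diseased state
# toward the healthy profile.

# Hypothetical causal graph: each gene (node) maps to the genes it regulates.
GRAPH = {
    "KDR":   ["TOP2A", "G1"],
    "TOP2A": ["G2"],
    "G1":    ["G2"],
    "G2":    [],
}

def propagate(state, perturbed, steps=3):
    """Module 2 stand-in: crude signal propagation. Perturbed genes are
    clamped off (0); each target blends its own level with its regulator's."""
    state = dict(state)
    for g in perturbed:
        state[g] = 0.0
    for _ in range(steps):
        new = dict(state)
        for g, targets in GRAPH.items():
            for t in targets:
                if t not in perturbed:
                    new[t] = 0.5 * state[t] + 0.5 * state[g]
        state = new
    return state

def score_perturbation(diseased, healthy, perturbed):
    """Squared distance between the predicted post-perturbation state
    and the healthy expression profile (lower is better)."""
    predicted = propagate(diseased, perturbed)
    return sum((predicted[g] - healthy[g]) ** 2 for g in healthy)

def select_targets(diseased, healthy, k=1):
    """Module 1 stand-in: greedily pick the k genes whose knockout best
    restores the healthy profile, instead of brute-forcing all subsets."""
    chosen = set()
    for _ in range(k):
        best = min(
            (g for g in GRAPH if g not in chosen),
            key=lambda g: score_perturbation(diseased, healthy, chosen | {g}),
        )
        chosen.add(best)
    return chosen

diseased = {"KDR": 1.0, "TOP2A": 1.0, "G1": 1.0, "G2": 1.0}
healthy = {"KDR": 0.0, "TOP2A": 0.2, "G1": 0.2, "G2": 0.2}
select_targets(diseased, healthy)  # → {"KDR"}
```

In this toy instance the greedy selector picks the upstream driver rather than its downstream effects, which mirrors the causal logic described above: silencing the root of the diseased signaling cascade does more than silencing any single target.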
Thanks to this architecture, PDGrapher is up to 25 times faster than existing models such as scGen and CellOT. Instead of brute-force searching thousands of combinations, it directly targets relevant ones. At the same time, prediction accuracy is higher, and results remain robust even for cell types not seen during training.
Across tests on 38 datasets, the model predicted therapeutic targets with 7–13% higher accuracy than comparable methods, and its overlap with real intervention points in genetic networks exceeded random baselines by 11.6%.
In the case of non-small cell lung cancer, PDGrapher identified key genes KDR and TOP2A and correctly predicted five of eleven targets of the drug Pralsetinib, which had not appeared in the training data.
PDGrapher introduces a new logic of scientific discovery: instead of analyzing how existing drugs work, it models potential future therapies based on causal relationships within the cell. This approach could radically accelerate drug development in oncology and other complex diseases, where traditional methods are constrained by experimental scale and incomplete data.
Researchers from the University of Southern California have introduced PDDL-INSTRUCT, a framework that teaches large language models to plan within strict logical systems. It addresses one of the core weaknesses of modern LLMs: low reliability in sequential reasoning tasks where a formally structured action plan, not just an answer, is required.
Language models typically lose accuracy when they must specify each action and its consequences in the correct order. PDDL-INSTRUCT integrates chain-of-thought reasoning with automated step verification, training AI systems not only to construct plans but also to validate them for logical consistency.
The framework is built on three key principles. First, the model explicitly reasons about which actions are possible, which states they lead to, and whether they violate system constraints. Second, the planning process is decomposed into elementary steps, from checking preconditions to verifying effects and invariants. Finally, an external verifier (VAL) provides feedback: it does not merely flag errors but explains their logical nature in detail, allowing the model to deliberately correct its reasoning.
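The verification step can be illustrated with a toy STRIPS-style plan checker. This is a simplified stand-in, not VAL itself (VAL is an external plan validator for PDDL); the blocks-world actions and predicate names below are invented for the sketch.

```python
# Minimal STRIPS-style plan checker illustrating the kind of step-by-step
# validation described above: replay each action, check its preconditions,
# and return an explanation (not just pass/fail) so the model can correct
# its reasoning.

# Hypothetical actions: name -> (preconditions, add effects, delete effects)
ACTIONS = {
    "pickup_A":  ({"clear_A", "ontable_A", "handempty"},
                  {"holding_A"},
                  {"clear_A", "ontable_A", "handempty"}),
    "stack_A_B": ({"holding_A", "clear_B"},
                  {"on_A_B", "clear_A", "handempty"},
                  {"holding_A", "clear_B"}),
}

def validate(plan, init, goal):
    """Simulate the plan from the initial state; report the first logical
    error with its cause, or confirm the goal is reached."""
    state = set(init)
    for i, act in enumerate(plan):
        pre, add, delete = ACTIONS[act]
        missing = pre - state
        if missing:
            return False, f"step {i} ({act}): unmet preconditions {sorted(missing)}"
        state = (state - delete) | add  # apply effects
    if not goal <= state:
        return False, f"goal literals not achieved: {sorted(goal - state)}"
    return True, "plan valid"

init = {"clear_A", "ontable_A", "clear_B", "ontable_B", "handempty"}
validate(["pickup_A", "stack_A_B"], init, {"on_A_B"})  # → (True, "plan valid")
validate(["stack_A_B"], init, {"on_A_B"})              # fails: not holding_A
```

The key design point mirrored here is the explanatory feedback: a failed plan yields which precondition was violated at which step, which is exactly the kind of signal the framework feeds back to the model during training.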
The results are compelling: planning accuracy reached 94% on standard benchmarks, 66 percentage points higher than baseline LLMs. Unlike typical models, PDDL-INSTRUCT does not simply memorize successful solutions; it reconstructs the logic behind them, aligning more closely with classical symbolic planners.
The study demonstrates that combining formal logic, step-by-step decomposition, and verification can significantly improve the reliability of language models in planning tasks. This line of research opens the door to deploying AI in domains where errors are unacceptable—robotics, logistics, industrial automation, and medicine—where every step must be both executed and justified.
Researchers from UC Berkeley and Google have introduced AlphaEvolve—an AI system demonstrating that artificial intelligence can contribute to breakthroughs in fundamental computational complexity theory, a field underpinning cryptography, telecommunications, and data-processing algorithms.
In this work, AlphaEvolve acted as an autonomous researcher, generating and testing hypotheses about the structure of complex mathematical objects previously considered out of reach for classical methods.
One of its key achievements involved new results on random graph problems. The model constructed rare types of graphs known as Ramanujan graphs, characterized by exceptional randomness and symmetry properties. AlphaEvolve generated examples with up to 163 vertices—previously believed to be computationally infeasible. These structures helped refine complexity bounds for well-known problems such as MAX-CUT and MAX-Independent Set, which concern partitioning a network optimally and finding the largest set of mutually non-adjacent nodes.
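For context, MAX-CUT asks how to split a graph's vertices into two groups so that as many edges as possible cross between them. A brute-force sketch on a toy instance (purely illustrative: AlphaEvolve's results concern asymptotic hardness bounds, not solving small instances):

```python
# Brute-force MAX-CUT on a tiny graph: try every 2-coloring of the n
# vertices and count the edges whose endpoints land on opposite sides.
from itertools import product

def max_cut(n, edges):
    best = 0
    for assign in product((0, 1), repeat=n):  # all 2^n vertex partitions
        cut = sum(1 for u, v in edges if assign[u] != assign[v])
        best = max(best, cut)
    return best

# 5-cycle: being an odd cycle, at most 4 of its 5 edges can cross a cut
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
max_cut(5, edges)  # → 4
```

The exponential 2^n loop is exactly why exact solutions are out of reach at scale, and why the field studies how closely efficient algorithms can approximate the optimum instead.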
In the second part of the study, the system generated new small combinatorial constructions—so-called “gadgets”—used in approximation theory. This field focuses on problems where exact solutions are computationally intractable, making it crucial to understand how close one can get to the optimal result.
AlphaEvolve helped tighten these limits: for example, the theoretical approximation bound for MAX-4-CUT was sharpened from 0.9883 to 0.987. For mathematicians, such differences represent meaningful refinements of the boundaries of what is computationally achievable.
AlphaEvolve’s primary technological breakthrough is speed. Hypothesis verification became up to 10,000 times faster than traditional methods, enabling exploration of far more complex structures and graph types previously inaccessible even to the most powerful computing systems.
Such systems are especially important today, as the volume and complexity of digital data grow faster than humans can analyze or control them. Algorithms like AlphaEvolve are not merely computational accelerators; they are tools shaping new principles on which the security and resilience of the modern digital world are built.
Researcher and co-founder of the School of Education Sofia Smyslova examines how online higher education initiatives created by Russian academics in forced emigration after 2022 enable participants to engage in “pedagogical production of the future” and to rebuild connections between past, present, and future.
The study is based on participatory methods, including learning diaries and reflective interviews with instructors and students from two informal online projects operating abroad. It shows how these initiatives function as spaces of intellectual recovery and “safe zones” for discussing experiences of loss, exile, and the search for new meanings.
Learning often takes on a therapeutic dimension. Through open discussions and collective reflection, participants find new ways of relating to knowledge, community, and time. For some, this process becomes a source of renewed academic and political subjectivity; for others, it is an attempt to maintain a sense of stability amid crisis and uncertainty.
The online format intensifies this multilayered dynamic. It dissolves territorial boundaries and enables open dialogue between those who remain in Russia and those who have left, creating a shared learning space.
However, the flexibility of online education does not eliminate all constraints. Alongside new forms of interaction, old hierarchies persist. Scheduling tied to Moscow time makes participation nearly impossible for students in distant regions, and traditional lecture formats are often transferred into the digital environment without adaptation.
Smyslova emphasizes that education in exile is not a fixed format with predefined goals, but a living process in which future scenarios are continuously produced through everyday pedagogical practices. In her view, such initiatives not only sustain intellectual life under conditions of loss and uncertainty, but also serve as experimental spaces where possible futures begin to take shape in the present.