A few years ago, I wrote about the “publish or perish” culture: the quiet pressure that pushes academics to keep producing, often at a pace that feels more mechanical than meaningful. That pressure has not disappeared. If anything, things have become smoother, almost too smooth. Complaints have faded, replaced by quiet adjustments. People have learned to work within the system. The question is no longer whether to publish, but how fast one can keep up.
The goal, of course, is still reasonable. Research is supposed to help—improve teaching, guide policy, inform care. Open access came from that same idea. But more papers do not always mean better understanding. Sometimes, it just adds to the pile.
Most researchers are not careless. They follow the process. But within that process, small compromises slip in: a quicker method, a result given less emphasis, a conclusion made to sound stronger. Not deception, just pressure at work. Recent studies suggest these practices are not rare; in some surveys, about half of researchers, and in some cases even more, report engaging in at least one questionable research practice. These are small decisions, but repeated often enough, they begin to shape what we accept as evidence.
All of this points to a mismatch. Researchers think in terms of quality. Institutions often think in terms of numbers. A global study involving stakeholders from multiple countries noted this gap clearly: what scholars value does not always align with what systems reward. It is a quiet mismatch, but one that shapes behavior in very real ways.
In our universities, this plays out in familiar ways. Promotion systems, funding opportunities, and recognition often still hinge on measurable outputs. Faculty members learn quickly what counts. It is not unusual to see a single study divided into smaller pieces, each turned into a separate paper. It is efficient. It meets requirements. It also dilutes the original insight. Research that takes time rarely keeps pace with systems that reward speed.
The consequences reach beyond journals. Teaching strategies shift, health advice evolves, and students lean on shortcuts just to cope. The volume of research, instead of helping, can overwhelm.
Numbers make it seem manageable. Impact factors, citations, h-indices—they look objective. But they are limited. As the French Académie des Sciences warns, they can mislead when used alone. Popular does not always mean reliable. Numbers can signal attention, but not necessarily depth.
There is also the matter of whose research gets noticed. Global journals still tend to favor certain voices—often English, often Western, often technical in a specific way. That leaves studies rooted in Filipino realities—our classrooms, our coastal communities, our own ways of knowing—less visible, even when they matter most locally. When success is measured globally, relevance at home can fade into the background.
Of course, producing more work is not all bad. Early on, writing often helps researchers grow. You learn by doing. But when the goal becomes output itself, research starts to feel mechanical—submit, publish, repeat. The harder, messier questions get pushed aside because they take time.
There are signs of change, though they remain uneven. Initiatives like the San Francisco Declaration on Research Assessment (DORA) and more recent global efforts to reform research evaluation are pushing institutions to look beyond publication counts and journal metrics. The shift is gradual, but visible—from counting outputs to understanding impact and contribution.
At its core, this is about discipline—not holding back, but staying aligned with purpose. Not just asking what we produce, but whether it adds clarity or simply adds more noise. It requires a willingness to slow down, even when the system rewards speed.
The earlier conversation on “publish or perish” raised concerns about quantity and pressure. What seems clearer now is that the issue runs deeper. It is not only about how much is published, but about what is quietly lost when speed becomes the standard. When research becomes something to complete rather than something to understand, its value begins to thin.
A long list of publications can still impress. It fills reports, strengthens profiles, and meets institutional expectations. But beyond those numbers lies a quieter measure. Does the work help someone think more clearly? Does it improve practice, even in small ways? Does it hold up when questioned, not just when counted? These are harder to quantify, but they are closer to what research was meant to do.
What remains, then, is not a dramatic choice, but a repeated one. Whether to add another paper, or to strengthen the one already in hand. Whether to move quickly, or to stay with the difficulty a little longer. Whether to produce for visibility, or to work toward understanding and depth. The system may not change overnight, but these choices shape what the system eventually becomes.
Because research, stripped of metrics and rankings, is still a human effort. It carries the weight of decisions, contexts, and intentions. And in that quiet space where those decisions are made, quality, rigor, and depth are not enforced. They are chosen.