246
submitted 1 day ago* (last edited 16 hours ago) by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Hello World,

as many of you know, several newer Lemmy versions have been released since the one we are currently using.

As this is a rather long post, the TLDR is that we're currently planning for late January/early February to update Lemmy.World to a newer Lemmy release.

We're currently running Lemmy 0.19.3 with a couple patches on top to address some security or functionality issues.

As new Lemmy versions have been released, we've been keeping an eye on other instances' experiences with the newer versions, as well as tracking certain issues on GitHub, which might impact stability or moderation experience.

We updated to Lemmy 0.19.3 back in March this year. By then, 0.19.3 had been released for a little over a month and all the major issues that troubled the earlier 0.19 releases had been addressed.

Several months later, in June, Lemmy 0.19.4 was released with several new features. This was a rather big release, as a lot of changes had happened since the previous release. Only 12 days later, 0.19.5 was released, which fixed a few important issues with the 0.19.4 release. Unfortunately, Lemmy 0.19.5 also introduced some changes that were not fully addressed at the time, and some of which still aren't.

Prior to Lemmy 0.19.4, regular users could see the contents of removed or deleted comments in some situations, primarily when using third party apps. Ideally, this would have been fixed by restricting access to the contents of removed comments to community moderators in the communities they moderate, as well as admins on each instance. Deleted comments are overwritten in the database after some delay, but they might still be visible before that happens. Not being able to see removed content is especially a problem when moderators want to review previously removed comments, either to potentially restore them or to understand the context in a thread with multiple removed comments. The Lemmy modlog also does not always record individual entries for bulk-removed items: banning a user while also removing their content, for example, will only log the ban, but not the individual posts or comments that were removed.

We were considering writing a patch to restore this functionality for moderators in their communities, but this is unfortunately a rather complex task, which also explains why this isn't a core Lemmy feature yet.

While admins can currently filter the modlog for actions by a specific moderator, this functionality was lost somewhere in 0.19.4. While this isn't something our admin team uses very frequently, it is still an important feature to have available for the times we need it.

The 0.19.4 release also included a few security changes to ActivityPub handling, which broke the ability to find e.g. Mastodon posts in Lemmy communities by entering the post URL in the search. It also caused issues with changes made to communities by remote moderators.

The 0.19.4 release also broke marking posts as read in Sync for Lemmy. Although this isn't really something we consider a blocker, it's still worth mentioning, as there are still a lot of Sync for Lemmy users out there who haven't noticed this issue yet because they're only active on Lemmy.World. Over the last 2 weeks we've had nearly 5k active Sync for Lemmy users. This is unfortunately something that will break during the upgrade, as the API has changed in upstream Lemmy.

There are also additional issues with viewing comments on posts in local communities, which appear to be related to the 0.19.4/0.19.5 release and appear to be a lot more serious. There have been various reports of posts showing zero comments in Sync, while viewing them in a browser or another client will show various comments. It's not entirely clear to us right now what the full impact is and to what extent it can be mitigated by user actions, such as subscribing to communities. If anyone wants to research what is needed to restore compatibility, and potentially even propose a patch for compatibility with both the updated and the previous API version, we'll consider applying it as a custom patch on top of the regular Lemmy release.

If there is no Sync update in time for our upgrade and we don't have a viable workaround available, you may want to check out !lemmyapps@lemmy.world to find potential alternatives.

Several instances also reported performance issues after their upgrades, although these mostly seemed to last only a relatively short time after the upgrade rather than being persistent.

Lemmy 0.19.6 was released in November and again introduced quite a few bug fixes and changes, including filtering the modlog by moderator. Due to a bug breaking some DB queries, 0.19.7 was released just 7 days later to address that.

Among the fixes in this release were the ability to resolve Mastodon URLs in the search again and remote moderators being able to update communities again.

0.19.6 also changed the way post thumbnails are generated, which resulted in thumbnails missing on various posts.

A month later, in December, 0.19.8 was released.

One of the issues addressed by 0.19.8 was Lemmy once again returning the content of removed comments to admins. For community moderators this functionality is not yet restored, due to the complexity of having to check mod status for every community present in the comment listing.

At this point it seems that most of the issues have been addressed, although there still seem to be some remaining issues with thumbnails not reliably being created in some cases. We'll keep an eye on any updates on that topic to see whether it might be worth waiting a little longer for another fix, or possibly deploying an additional patch even if it isn't part of an official Lemmy release yet at the time.

We had backported some security/stability related changes, including a fix for a bug that can break federation in some circumstances when a community is removed. Unfortunately, we accidentally reverted that fix while applying another backport, which resulted in our federation with lemmy.ml breaking back in November. The bug was already addressed upstream a while back, so other instances running more recent Lemmy versions were not affected.

Among the new features released in the Lemmy versions we have missed out on so far, here are a couple highlights:

  • Users will be able to see and delete their uploads on their profile. This will include all uploads since we updated to 0.19.3, which is the Lemmy version that started tracking which user uploaded media.
  • Several improvements to federation code, which improve compatibility with WordPress, Discourse, and NodeBB.
  • Fixed signed fetch for federation, enabling federation with instances that require linked instances to authenticate themselves when fetching remote resources. We've seen the lack of this cause issues with a small number of Mastodon instances that require it.
  • Site bans will automatically issue community bans, which means they're more reliable to federate.
  • Deleted and removed posts and comments will no longer show up in search results.
  • Bot replies and mentions will no longer be included in notification counts when a user has blocked all bots.
  • Saved posts and comments will now be returned in the reverse order in which they were saved, rather than the reverse order in which they were created.
  • The image proxying feature has evolved to a more mature state. This feature intends to improve user privacy by reducing requests to third party websites when browsing Lemmy. We do not currently plan on enabling it with the update, but we will evaluate it later on.
  • Local only communities. We don't currently see a good use for these, as they will prevent federation of such communities. This cuts off users on all other instances, so we don't recommend using them unless you really want that.
  • Parallel sending of federated activities to other instances. This can be especially useful for instances on the other side of the world, where latency introduces serious bottlenecks when only sending one activity at a time. A few instances have already been using intermediate software to batch activities together, which is not standard ActivityPub behavior, but it allows them to eliminate most of the delays introduced by latency. This mostly affects instances in Australia and New Zealand, but we've also seen federation delays with instances in the US from time to time. This will likely not be enabled immediately after the upgrade, but we're planning to enable it shortly after. (A rough latency sketch follows this list.)
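To give a rough idea of why sequential sending becomes a bottleneck over long distances, here is a purely illustrative back-of-the-envelope calculation. The round-trip time and activity volume below are made-up example numbers, not measurements from our instance:

```python
# Illustrative only: example numbers, not measurements from lemmy.world.
round_trip_s = 0.3            # ~300 ms round trip to an instance on the other side of the world
needed_per_hour = 20_000      # hypothetical hourly activity volume towards that instance

# Sequential sending waits for each activity to be acknowledged before sending the next,
# so throughput is capped by the round-trip time.
sequential_per_hour = 3600 / round_trip_s       # ~12,000/h -> falls behind
# With e.g. 8 activities in flight at once, the same latency is paid in parallel.
parallel_per_hour = 8 * 3600 / round_trip_s     # ~96,000/h -> keeps up comfortably

print(f"needed: {needed_per_hour}/h, sequential: {sequential_per_hour:.0f}/h, "
      f"parallel (8x): {parallel_per_hour:.0f}/h")
```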

edit: added information about sync not showing comments on posts in local communities

166

Hello World,

today, @db0@lemmy.dbzer0.com provided an update to the media upload scanner we're using. This should reduce the number of false positives blocked from being uploaded. We have now deployed the updated version.

While we do not have stats on false positives from before we implemented scanning at upload time, that change did not affect overall data availability for us: flagged images were already being deleted, they were just still served from our cache in many cases. Moving the scan to the upload process has made it much more effective, as previously flagged images could persist in Cloudflare's cache for extended periods of time, while now they don't get cached in the first place.

Over the last week, we've seen roughly 6.7% of around 3,000 total uploads rejected. We'll be able to compare numbers in a week to confirm that this has indeed improved the false positive rate.
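For a rough sense of the absolute numbers behind that rate (both figures above are approximate):

```python
# Approximate absolute numbers behind the rejection rate quoted above.
total_uploads = 3_000
rejection_rate = 0.067
print(f"~{round(total_uploads * rejection_rate)} rejected uploads over the week")  # ~200
```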

234

Hello World,

following feedback we have received in the last few days, both from users and moderators, we are making some changes to clarify our ToS.

Before we get to the changes, we want to remind everyone that we are not a (US) free speech instance. We are not located in the US, which means different laws apply. As written in our ToS, we're primarily subject to Dutch, Finnish and German laws. Additionally, it is at our discretion to further limit discussion that we don't consider tolerable. There are plenty of other websites out there hosted in the US and promoting free speech on their platform. You should be aware that even free speech in the US does not cover true threats of violence.

Having said that, we have seen a lot of comments removed with reference to our ToS that were not actually intended to be covered by it. After discussion with some of our moderators, we have determined that there is both some ambiguity in our ToS and a lack of clarity about what we expect from our moderators.

We want to clarify that, when moderators believe certain parts of our ToS do not appropriately cover a specific situation, they are welcome to bring these issues up with our admin team for review, escalating the issue without taking action themselves when in doubt. We also allow for moderator discretion in a lot of cases, as we generally don't review each individual report or moderator action unless they're specifically brought to admin attention. This also means that content that may be permitted by ToS can at the same time be violating community rules and therefore result in moderator action. We have added a new section to our ToS to clarify what we expect from moderators.

We generally aim to avoid content organizing, glorifying or suggesting harm to people or animals, but we are limiting the scope of our ToS to build the minimum framework inside which we can all have discussions, leaving a broader area for moderators to decide what is and isn't allowed in the communities they oversee. We trust the moderators' judgement, and in cases where we see a gross disagreement between moderators' and admins' criteria we can have a conversation and reach an agreement, as in many cases the decision is case-specific and context matters.

We have previously asked moderators to remove content relating to jury nullification when this was suggested in the context of murder or other violent crimes. Following a discussion in our team, we want to clarify that we are no longer requesting moderators to remove content relating to jury nullification in the context of violent crimes when the crime in question has already happened. We will still consider suggestions of jury nullification for crimes that have not (yet) happened as advocating violence, which violates our terms of service.

As always, if you stumble across content that appears to be violating our site or community rules, please use Lemmy's report functionality. Especially when threads are very active, moderators will not be able to go through every single comment for review. Reporting content and providing accurate reasons for reports will help moderators deal with problematic content in a reasonable amount of time.

52

Hello World,

we've just updated our tooling that scans uploaded images for illegal content.

We don't expect this to cause any issues. If you do experience any issues with uploads or media in general please let us know.

This will hopefully improve the current situation where sometimes previews, thumbnails or entire images have gone missing after they were posted.

Uploads will now be scanned at the time they are created, which should result in immediate feedback when uploads are rejected.
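Conceptually, scanning at upload time looks roughly like the sketch below. This is only an illustration of the flow; the scanner endpoint, response fields and function names are hypothetical placeholders, not the actual tooling we run:

```python
# Conceptual sketch of upload-time scanning; the endpoint and response fields
# are hypothetical placeholders, not the actual scanner integration we run.
import requests

SCANNER_URL = "http://localhost:8080/scan"  # hypothetical local scanner service

def handle_upload(image_bytes: bytes) -> dict:
    """Scan an image before it is stored, so rejected uploads never reach storage or the CDN cache."""
    resp = requests.post(SCANNER_URL, files={"image": image_bytes}, timeout=10)
    resp.raise_for_status()
    if resp.json().get("flagged", False):
        # Immediate feedback to the uploader; nothing gets stored or cached.
        return {"status": 400, "error": "upload rejected by content scanner"}
    # Only clean images continue to the normal storage path.
    return {"status": 200, "message": "upload accepted"}
```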

[-] lwadmin@lemmy.world 43 points 1 month ago

You can call me Leo. Leo Wadmin.

142

Due to maintenance at our hosting provider Hetzner, there might be an outage of lemmy.world between 03:30 and 04:30 UTC on November 25th. (What's that in my timezone?)

Status can be found at https://status.lemmy.world
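If you'd like to double-check the window against your own timezone, a quick way to do it (Python 3.9+, standard library only; the date is taken from this post, and the timezone name is just an example to replace with your own):

```python
# Convert the announced maintenance window from UTC to a local timezone (Python 3.9+).
from datetime import datetime
from zoneinfo import ZoneInfo

start = datetime(2024, 11, 25, 3, 30, tzinfo=ZoneInfo("UTC"))
end = datetime(2024, 11, 25, 4, 30, tzinfo=ZoneInfo("UTC"))

local = ZoneInfo("Europe/Amsterdam")  # replace with your own timezone
print(start.astimezone(local), "to", end.astimezone(local))
```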

278
submitted 1 month ago* (last edited 4 weeks ago) by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

We're aware of ongoing federation issues for activities being sent to us by lemmy.ml.

We're currently working on the issue, but we don't have an ETA right now.

Cloudflare is reporting 520 - Origin Error when lemmy.ml tries to send us activities, but the requests don't seem to properly arrive at our proxy server. Federation with all other instances is working fine so far, but we have seen a few other requests, not related to activity sending, that occasionally report the same error.

~~Right now we're about 1.25 days behind lemmy.ml.~~

You can still manually resolve posts in lemmy.ml communities or comments by lemmy.ml users in our communities to make them show up here without waiting for federation, but this obviously is not something that will replace regular federation.

We'll update this post when there is any new information available.


Update 2024-11-19 17:19 UTC:

~~Federation is resumed and we're down to less than 5 hours lag, the remainder should be caught up soon.~~

The root cause is still not identified unfortunately.


Update 2024-11-23 00:24 UTC:

We've explored several different approaches to identify and/or mitigate the issue, including replacing our primary load balancer with a new VM, updating HAProxy from the version packaged in Ubuntu 24.04 LTS to the latest upstream version, and finding and removing a configuration option that may have prevented logging of certain errors. So far we haven't made any real progress beyond ruling out various potential causes.

We're currently waiting for the lemmy.ml admins to be available to reset federation failures at a time when we can start capturing some traffic to get more insights into the traffic that is hitting our load balancer, as the problem seems to be either between Cloudflare and our load balancer, or within the load balancer itself. Due to real life time constraints, we weren't able to find a suitable time this evening; we expect to be able to continue with this tomorrow during the day.

As of this update we're about 2.37 days behind lemmy.ml.

We are still not aware of similar issues on other instances.


Update 2024-11-25 12:29 UTC:

We have identified the underlying issue: a backported bugfix for a bug that caused crashes in certain circumstances was accidentally reverted when another backport was applied. We have applied the patch again and we're receiving activities from lemmy.ml again. It may take an hour or so to catch up, but this time we should reliably get there. We're currently 4.77 days behind.

We still don't have an explanation for why these requests were missing from the HAProxy logs after going through Cloudflare, but this shouldn't cause any further federation issues.


Update 2024-11-25 14:31 UTC:

Federation has fully caught up again.

267
submitted 2 months ago by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Hello World,

we know that votes are one of the most important indicators of community interest in posts and comments. Beyond that, reports help to regulate comments and posts that go beyond downvotes and into rule-breaking territory.

Unfortunately, Lemmy's current reporting system has some shortcomings. Here are a few key technical points that everyone should be aware of:

  • Reports are NOT always visible to remote moderators. (See Full reports federation · Issue #4744 · LemmyNet/lemmy · GitHub)
  • Resolving reports on a remote instance does not federate if the content isn't removed.
  • Removal only resolves reports in some cases, e.g. when individual content is removed. Some moderators remove individual content before banning the user, especially when the ban includes content removal, as individual content removals resolve reports on all instances. Bans with the "remove content" checkbox selected will not currently resolve reports automatically.

Our moderators are also volunteers with other demands on their time: work gets busy, people take vacations, and life outside of Lemmy goes on. But that doesn't mean content that causes distress to users should stick around any longer than reasonable. Harmful content impacts both Lemmy.world users AND users of all other instances. We always strive to be good neighbors.

While the devs work on the technical issues, we can work to improve the human side of things. To help keep the report queue manageable and help reports get resolved in a REASONABLE amount of time, we will be enabling global notifications for all stale reports. Open reports older than 1 day will trigger a notification every 2 days for up to 1 week, so as not to endlessly overwhelm community moderators. If a community moderator does not wish to receive these notifications, they may simply block the bot, but we still expect them to address unresolved reports.
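To make that cadence concrete, here is a minimal sketch of the scheduling rule described above. The field names and structure are simplified placeholders, not our actual bot implementation:

```python
# Minimal sketch of the stale-report notification cadence described above.
# Field names are simplified placeholders, not our actual bot implementation.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=1)   # a report counts as stale after 1 day
REMIND_EVERY = timedelta(days=2)  # remind at most every 2 days
REMIND_FOR = timedelta(days=7)    # stop reminding after 1 week

def should_notify(report: dict, now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    age = now - report["created_at"]
    if report["resolved"] or age < STALE_AFTER or age > STALE_AFTER + REMIND_FOR:
        return False
    last = report.get("last_notified_at")
    return last is None or now - last >= REMIND_EVERY
```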

We expect that this feature will eventually be removed once Lemmy has a better reporting system in place.

As an aside, we recommend that all LW communities have at least two moderators, so there is some redundancy in resolving reports and to reduce the workload on individual moderators. We know this can be harder for smaller communities, but this is just a recommendation.

We also recommend that moderators based on other instances have alt accounts on Lemmy.world, so they can resolve reports that come from federated instances.

As always feedback is welcome. This is an active project which we really hope will benefit the community. Please feel free to reply to this thread with comments, improvements, and concerns.

Thanks!

FHF / LemmyWorld Admin team 💖

Additional mods

We recommend using the Photon front-end for adding mods to your community.

With the standard lemmy-ui, additional mods can only be appointed if the account has made a post or comment in the community.

Photon (https://p.lemmy.world) does not have this limitation. You can use the community sidebar: click the Gear (settings) icon, then click the Team tab, and add any user as a mod.

520
submitted 3 months ago* (last edited 3 months ago) by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Intro

We would like to address some of the points that have been raised by some of our users (and by one of our communities here on Lemmy.World) on /c/vegan regarding a recent post concerning vegan diets for cats. We understand that the vegan community here on Lemmy.World is rightfully upset with what has happened. In the following paragraphs we will do our best to respond to the major points that we've gleaned from the threads linked here.

Links


Actions in question

Admin removing comments discussing vegan cat food in a community they did not moderate.

The comments have been restored.

The comments were removed for violating our instance rule against animal abuse (https://legal.lemmy.world/tos/#11-attacks-on-users). Rooki is a cat owner himself, and he was convinced that it was scientific consensus that cats cannot survive on a vegan diet. This was the original justification for the removal.

Even if one of our admins does not agree with what is posted, the content should not be removed unless it violates instance rules.

Removing some moderators of the vegan community

Removed moderators have been reinstated.

This was, in the first place, a failure of communication. It should have been clearly communicated to the moderators why a certain action was taken (instance rules) and that a reversal of that action would not be considered (during the original incident).

The correct way forward in this case would have been an appeal to the admin team, which would have been handled by someone other than the admin initially acting on this.

We generally discuss high impact actions among the team before acting on them. This should especially be the case when there is no strong urgency to the action. Since this was only a moderator removal and not a ban, it should have been discussed among the team before acting.

Going forward we have agreed, as a team, to discuss such actions first, to help prevent future conflict.

Posting their own opposing comment and elevating its visibility

Moderators' and admins' comments are flagged with flair, which is okay and by design on Lemmy. But their comments are not forced above the comments of other users for the purpose of arguing a point.

These comments were not elevated to appear before any other users' comments.

In addition, Rooki has since revised his comments to be more subjective and less reactive.


Community Responses

The removed comments presented balanced views on vegan cat food, citing scientific research supporting its feasibility if done properly.

Presenting scientifically backed, peer-reviewed studies is 100% allowed, and encouraged. While we understand anyone can cherry-pick studies, if an individual can find a large amount of evidence for their case, then by all accounts they are (in theory) technically correct.

That being said, using facts to bully others is not in good faith either, for example by flooding threads with JSTOR links.

The topic is controversial but not clearly prohibited by site rules.

That is correct; at the time there was no violation of site-wide rules.

Rooki's actions appear to prioritize his personal disagreement over following established moderation guidelines.

Please see the above regarding addressing moderator policy.


Conclusions

Regarding moderator actions

We will not be removing Rooki from his position as moderator, as we believe that would be a disproportionate response to a heat-of-the-moment decision.

Everybody makes mistakes, and while we do try to hold the site admin staff to a higher standard, calling for folks' resignation from volunteer positions over it would not be fair to them. Rooki has given up hundreds of hours of his free time to help Lemmy.World, FHF and the Fediverse as a whole grow in far-reaching ways. You don't immediately fire your staff when they make a bad judgment call.

While we understand that this may not be good enough for some users, we hope that they can be understanding that everyone, no matter the position, can make mistakes.

We've also added a new by-laws section detailing the course of action users should ideally take when conflict arises. In the event that a user needs to go above the admin team, we've provided a secure link to the operations team (who the admins ultimately report to). See https://legal.lemmy.world/bylaws/#12-site-admin-issues-for-community-moderators for details.

TL;DR In the event of an admin action that is deemed unfair or overstepping, moderators can raise this with our operations team for an appeal/review.

Regarding censorship claims

Regarding the alleged censorship, comments were removed without a proper reason. This was out of line, and we will do our best to make sure that this does not happen again. We have updated our legal policy to reflect the new rules in place that bind both our users AND our moderation staff regarding removing comments and content. We WANT users to hold us accountable to the rules we've ALL agreed to follow, going forward. If members of the community find any of the rules we've set forth unreasonable, we promise to listen and adjust these rules where we can. Our terms of service are very much a living document, as any proper binding governing document should be.

Controversial topics can and should be discussed, as long as they do not create a risk of imminent physical harm. We are firm believers in the Hippocratic Oath of "do no harm".

We encourage users to also list pros and cons regarding controversial viewpoints to foster better discussion. Listing the cons of your viewpoint does not mean you are wrong or at fault, just that you are able to look at the issue from another perspective and are aware of potential points of criticism.

While we want to allow our users to express themselves on our platform, we also do not want users to spread misinformation that risks causing direct physical harm to another individual or organization, or to property owned by the aforementioned. To echo the previous statement: "do no harm".

To this end, we have updated our legal page to make this clearer. We already have provisions against attacking groups, threatening individuals and animal harm; this is a logical extension of those, intended both to protect our users and to protect our staff from legal recourse, and to make the rules clearer for everyone. We feel this is a very reasonable compromise, and we take these additions very seriously.

See Section 8 Misinformation

Sincerely,
FHF / LemmyWorld Operations Team


EDIT: Added org operations contact info

[-] lwadmin@lemmy.world 60 points 3 months ago

We will be releasing a separate post involving that incident in the next 24-48 hours, just getting final approval from the team.

540
submitted 3 months ago* (last edited 3 months ago) by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Hey all,

In light of recent events concerning one of our communities (/c/vegan), we (as a team) have spent the last week working on how to better address some concerns that had arisen between the moderators of that community and the site admin team. We always strive to find a balance between the free expression of communities hosted here and protecting users from potentially harmful content.

We as a team try to stick to a general rule of respect and consideration for the physical and mental well-being of our users when drafting new rules and revising existing ones. Furthermore, we've done our best to try to codify these core beliefs into the additions to the ToS and a new by-laws section.

ToS Additions

That being said, we will be adding a new section to our “terms of service” concerning misinformation. While we do try to be as exact as reasonably possible, we also understand that rules can be open to interpretation. This is a living document, and users are free to respectfully disagree. We as site admins will do our best to consider the recommendations of all users regarding potentially revising any rules.

Regarding misinformation, we've tried our best to capture these main ideas, which we believe are very reasonable:

  • Users are encouraged to post information they believe is true and helpful.
  • We recommend users conduct thorough research using reputable scientific sources.
  • When in doubt, a policy of “Do No Harm”, based on the Hippocratic Oath, is a good compass on what is okay to post.
  • Health-related information should ideally be from peer-reviewed, reproducible scientific studies.
    • Single studies may be valid, but often provide inadequate sample sizes for health-related advice.
    • Non-peer-reviewed studies by individuals are not considered safe for health matters.

We reserve the right to remove information that could cause imminent physical harm to any living being. This includes topics like conversion therapy, unhealthy diets, and dangerous medical procedures. Information that could result in imminent physical harm to property or other living beings may also be removed.

We know some folks who are free speech absolutists may disagree with this stance, but we need to look out for both the individuals who use this site and for the site itself.

By-laws Addition

We've also added a new by-laws section as a result of this incident. This new section better codifies the course of action that should be taken by site and community moderators when resolving conflict on the site, as well as how to deal with dormant communities.

This new section also provides a course of action for resolving conflict with site admin staff, should it arise. We want both the users and moderators here to feel like they have a voice that is heard, and a contact point they can feel safe going to for a “talk to the manager” type of situation; more or less a new Lemmy.World HR department that we've created as a result of what has happened over the last week.

Please feel free to raise any questions in this thread. We encourage everyone to please take the time to read over these new additions detailing YOUR rights and how we hope to better protect everyone here.

https://legal.lemmy.world/tos/#80-misinformation

https://legal.lemmy.world/bylaws/

Sincerely,

FHF / LemmyWorld Operations Team


EDIT:

We will be releasing a separate post regarding the moderation incident in the next 24-48 hours, just getting final approval from the team.

EDIT 2 (2024-08-31):

We've posted a response, sorry for the delay.

👉 https://lemmy.world/post/19264848 👈

102
submitted 4 months ago* (last edited 4 months ago) by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Hello Lemmy.World users,

yesterday we had an incident where human error led to the accidental removal of 2FA for all Lemmy.World users.
Until the mistake had been corrected and the original state had been restored where possible, 2FA was not enforced for any logins, even if the user had enabled 2FA prior to this.

Timeline (all times in UTC):

At approximately 2024-08-09 09:30, 2FA was removed for all users due to a mistake made while 2FA was intended to be reset for an individual user.
Around 2024-08-09 22:10 we became aware of the issue due to a user reporting that they were no longer prompted for 2FA on login. We immediately started an investigation to determine the root cause and discovered the mistake that was made earlier. Once the root cause was identified we started working on restoring the original state.
At 2024-08-10 01:10, 2FA had been reactivated for all users who previously had 2FA enabled and hadn't reactivated it on their own since. After additional investigation to identify affected users with 2FA who had logged in during this period, we sent out individual messages with information about logins to their accounts during this time.

Although less than 2% of our active users have 2FA enabled, we are committed to keeping our user accounts as secure as reasonably possible, and we will review our processes for resetting 2FA for individual users going forward to reduce the risk of this happening again in the future.
Fortunately, our robust database backups captured the exact state we had just before this change happened, allowing us to restore the original 2FA secrets for all affected users.
During this period, we observed a total of 824 logins. 18 of these logins were made by 14 users whose 2FA had been disabled. Notifications to all affected users for whom we observed logins during this period were sent shortly after publishing this post.
2 users had already reactivated their 2FA on their own, so we did not revert their 2FA to the previous state.
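For those curious what such a targeted restore can look like, here is a simplified sketch. The table and column names are assumptions for illustration only; this is not the exact schema or procedure we used:

```python
# Simplified sketch of restoring 2FA secrets from a pre-incident backup.
# Table/column names are assumptions for illustration, not the exact schema or procedure used.
import psycopg2

RESTORE_SQL = """
UPDATE local_user AS live
SET    totp_2fa_secret = backup.totp_2fa_secret
FROM   local_user_backup AS backup          -- copy of the table restored from the pre-incident backup
WHERE  live.id = backup.id
  AND  backup.totp_2fa_secret IS NOT NULL   -- only users who had 2FA before the incident
  AND  live.totp_2fa_secret IS NULL;        -- skip users who already re-enabled 2FA themselves
"""

with psycopg2.connect("dbname=lemmy") as conn:
    with conn.cursor() as cur:
        cur.execute(RESTORE_SQL)
        print(f"restored 2FA secrets for {cur.rowcount} users")
```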

If you have any concerns that your account may have been compromised during this period due to the lack of 2FA enforcement feel free to reach out to us via email to info@lemmy.world or via PM to @lwadmin@lemmy.world.

378
submitted 5 months ago by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Hey everyone,

We wanted to address a common situation we've seen over the years on some large online forums: how to reach out to moderators with questions and concerns.

Typically a good first line of contact is to use the Report button for any posts that need moderator attention. This will open a report for both the community moderators and the site admins. This is normally enough for most larger communities with active mod staff. You may also reach out to your community mods; they're here to help. 💖

We do encourage our communities to take care of their own and want them to always feel empowered to run themselves as they see fit (provided they are following the global site rules). Issues can arise for some smaller communities with only one moderator or situations where the local mod staff is inactive for one reason or another for an extended time.

In these situations, we've seen folks email our ticket system, and while we will try to get back to them, this ticket system tends to get end-user support cases mixed in (as well as a low-volume stream of spammers trying to buy ads 🙄).

What we do want is to provide a more direct line of contact to the site-admin team, so that when ANYONE needs to reach out to us because something is amiss, they can get a timely response. While the team members are all on a few different platforms, this tends to create a fragmented picture of not just WHO, but HOW to contact us.

While we as a team use our internal chat platform, we want to find a happy middle ground between @'ing us each directly in random chats, and issues sitting for too long in our ticket system.

This new system is for PRIVATE messages, as a supplement to our Lemmy.world Support community, which we do monitor very closely, and our ticket email info@lemmy.world, both of which are for more technical issues. Think of it like the bat signal 🦇🎆!

To this end, we've created two methods for reaching the site-mod team directly.

We hope that between both an anonymous platform approach AND a secure email-based approach, we can keep everyone happy 💝

In closing, please always try to reach out to your community mods FIRST; they usually know best how to handle a difficult or tricky situation. Use this only when you feel you need to reach us right away.


  • The FHF/Lemmy.World Admin team 💓
93
submitted 5 months ago* (last edited 5 months ago) by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Hey all, we are planning to do a Docker upgrade tomorrow, Sunday 6/30, around 18:00 UTC. The whole site will be down during the upgrade, but should only be down for a short while. Just wanted to give everyone a heads up. Thanks!

  • @jelloeater85@lemmy.world
[-] lwadmin@lemmy.world 139 points 1 year ago

This is a volunteer platform, and as such no one is paid. Applicants may include their availability info and be considered accordingly.

[-] lwadmin@lemmy.world 43 points 1 year ago* (last edited 1 year ago)

No, even when the option arrives for users to block whole instances, we will still defederate from instances whose content we do not want to moderate. But we also always reserve the right to re-federate with any instance if the concerns are resolved.

And as per https://lemmy.world/legal : We are not a free speech zone. This Code of Conduct lays out the expected standards of conduct and behavior. Users may not say or post anything that violates these rules, and all participants are required to follow this code. If you disagree with this code, you are welcome to keep looking for other Lemmy instances. Here’s a list of all public instances.

[-] lwadmin@lemmy.world 115 points 1 year ago

A tolerable level is one we can handle through moderation. And when even the admins join in, it becomes clear there is a big incompatibility and cultural difference.

But you probably meant something else, right?

[-] lwadmin@lemmy.world 92 points 1 year ago

Most of their communities had been blocked for months; that's why you didn't see much of them.

[-] lwadmin@lemmy.world 57 points 1 year ago

We are well aware of what's going on with kbin and its development team. That's why we aren't defederating: we have hope that they will fix things soon.

[-] lwadmin@lemmy.world 209 points 1 year ago

@Striker@lemmy.world this is not your fault. You stepped up when we asked you to and actively reached out for help getting the community moderated. But even with extra moderators this cannot be stopped. Lemmy needs better moderation tools.

[-] lwadmin@lemmy.world 116 points 1 year ago

Thank you for the kindness!

[-] lwadmin@lemmy.world 80 points 1 year ago* (last edited 1 year ago)

You need to hover over the status bar to see if there is any downtime for that day. We can enable it to log incidents every time there is a burp, but we are still tuning alerts, as we currently only have it create an incident when we ACK it in PagerDuty. You can always check the dashboard for up-to-the-minute stats, as well as https://lemmy-status.org/endpoints/_lemmy-world. We'll add this info to make things clearer <3

EDIT: Added more info to our status page, thanks for the feedback Machefi!

EDIT2: Also the missing data is due to us removing and adding more specific monitors for the different infra services.

[-] lwadmin@lemmy.world 121 points 1 year ago* (last edited 1 year ago)

This was a misunderstanding by one of the team members. It has since been discussed and will not happen again. Lemmy.World and this announcement community are our primary platform.

[-] lwadmin@lemmy.world 76 points 1 year ago

Doesn't matter if they are hosted here or not. The way federation works is that threads on different instances are cached locally.

We have NO issues with the people at db0 - we are just looking out for ourselves in a 'better safe than sorry' fashion while we find out more. As mentioned in the OP, we would like to unblock as soon as we know we cannot get into any legal trouble.
