[-] admin@lm.boing.icu 1 points 11 months ago

Will try this, thanks for the tip

[-] admin@lm.boing.icu 3 points 11 months ago

Thanks, good point. Didn't know about that risk

16
submitted 11 months ago by admin@lm.boing.icu to c/lemmy@lemmy.ml

I'm using my own instance. Is there a way to block posts coming from other instances that are below a certain vote threshold, for example -3? I don't need to see them; I trust the moderators of other instances to handle their own posts responsibly. As it is today, half my home screen is full of posts at -40 that might already have been deleted on their home instance, but they get downloaded to me regardless.

[-] admin@lm.boing.icu 1 points 1 year ago

Here is what I'm using atm. Is there a better way to do this? I'm still learning K8S :)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarr-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 250Mi
---
[....]
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: sonarr-pvc
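
For context, the second snippet is the volumes section of the Deployment's pod spec; the matching volumeMounts entry in the container would look something like this (the container name, image, and /config path are assumptions based on the common linuxserver image, not part of my actual manifest):

```yaml
# Hypothetical container spec fragment: mounts the sonarr-pvc claim
# (declared via the "config" volume above) at Sonarr's config path.
containers:
- name: sonarr
  image: lscr.io/linuxserver/sonarr
  volumeMounts:
  - name: config        # must match the volume name in the pod spec
    mountPath: /config  # where Sonarr keeps its database and settings
```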
[-] admin@lm.boing.icu 1 points 1 year ago

Yeah, I'm using Longhorn. Might be that I have set it up wrong, but it didn't seem to help with the DB corruption issue.

[-] admin@lm.boing.icu 8 points 1 year ago

Basically this. I have my home stuff running in a K3s cluster, and I had to restore my Sonarr volume several times because the SQLite DB got corrupted. Transitioning to Postgres should solve this issue, and I already have quite a few other services in it, for example Radarr and Prowlarr.

33

Thought I would let you all know in case you have missed it. A few days ago Postgres support was finally merged into the Sonarr dev branch (i.e. the 4.x versions). I have already transitioned to it; so far it runs without issues.

You can mostly follow the same instructions as for Radarr from here: https://wiki.servarr.com/radarr/postgres-setup
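
If I remember the wiki right, the setup boils down to creating the databases and then pointing the *arr app at Postgres via entries in its config.xml before first start. A sketch of what that fragment looks like; the credentials and host are placeholders matching the pgloader command below, and the log DB name is my assumption:

```xml
<!-- Fragment of Sonarr's config.xml; values are placeholders.
     Sonarr uses two databases: one main, one for logs. -->
<Config>
  <PostgresUser>user</PostgresUser>
  <PostgresPassword>pwd</PostgresPassword>
  <PostgresHost>DB-IP</PostgresHost>
  <PostgresPort>5432</PostgresPort>
  <PostgresMainDb>sonarr-main</PostgresMainDb>
  <PostgresLogDb>sonarr-log</PostgresLogDb>
</Config>
```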

I used the following temporary docker container to do the conversion (obviously replace stuff you need to):

docker run --rm -v /path/to/sonarr.db:/sonarr.db --network=host dimitri/pgloader pgloader --debug --verbose --with "quote identifiers" --with "data only" "sqlite://sonarr.db" "postgresql://user:pwd@DB-IP/sonarr-main"

When the run completes, it outputs a kind of table that shows whether there were any errors. In my case there were 2 tables (can't remember which ones anymore) that couldn't be inserted, so I edited those manually afterwards to match the ones in the original DB.
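
If you want to double-check which tables made it over, one way is to count rows per table on the SQLite side first and compare against what ends up in Postgres. A minimal stdlib sketch (nothing Sonarr-specific, just generic SQLite introspection):

```python
import sqlite3

def table_counts(db_path):
    """Return {table_name: row_count} for every user table in a SQLite DB."""
    con = sqlite3.connect(db_path)
    try:
        # sqlite_master lists all objects; skip SQLite's internal tables
        tables = [r[0] for r in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' "
            "AND name NOT LIKE 'sqlite_%'")]
        return {t: con.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0]
                for t in tables}
    finally:
        con.close()
```

Run it against sonarr.db before the migration, then compare the numbers with `SELECT COUNT(*)` on the Postgres side.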

[-] admin@lm.boing.icu 4 points 1 year ago

I got the same Obsidian+Syncthing setup atm, just haven't really tried to use it for writing yet. Wanted to see what else others use that may trump it :)

44
Software for writing (lm.boing.icu)

For those who do write novels, books etc. What software do you use? What format? FOSS or proprietary?

1
submitted 1 year ago by admin@lm.boing.icu to c/lemmy@lemmy.ml

Hi all,

I'm having an issue with my Lemmy on K8S that I selfhost. No matter what I do, Pictrs doesn't want to use my Minio instance. I even dumped the env variables inside the pod, and those seem to be like described in the documentation. Any ideas?

apiVersion: v1
kind: ConfigMap
metadata:
  name: pictrs-config
  namespace: lemmy
data:
  PICTRS__STORE__TYPE: object_storage
  PICTRS__STORE__ENDPOINT: http://192.168.1.51:9000
  PICTRS__STORE__USE_PATH_STYLE: "true"
  PICTRS__STORE__BUCKET_NAME: pict-rs
  PICTRS__STORE__REGION: minio
  PICTRS__MEDIA__VIDEO_CODEC: vp9
  PICTRS__MEDIA__GIF__MAX_WIDTH: "256"
  PICTRS__MEDIA__GIF__MAX_HEIGHT: "256"
  PICTRS__MEDIA__GIF__MAX_AREA: "65536"
  PICTRS__MEDIA__GIF__MAX_FRAME_COUNT: "400"
---
apiVersion: v1
kind: Secret
metadata:
  name: pictrs-secret
  namespace: lemmy
type: Opaque
stringData: 
  PICTRS__STORE__ACCESS_KEY: SOMEUSERNAME
  PICTRS__STORE__SECRET_KEY: SOMEKEY
  PICTRS__API_KEY: SOMESECRETAPIKEY
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pictrs
  namespace: lemmy
spec:
  selector:
    matchLabels:
      app: pictrs
  template:
    metadata:
      labels:
        app: pictrs
    spec:
      containers:
      - name: pictrs
        image: asonix/pictrs
        envFrom:
        - configMapRef:
            name: pictrs-config
        - secretRef:
            name: pictrs-secret
        volumeMounts:
        - name: root
          mountPath: "/mnt"
      volumes:
        - name: root
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: pictrs-service
  namespace: lemmy
spec:
  selector:
    app: pictrs
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
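
One thing worth ruling out before blaming the env vars: whether the Minio endpoint, credentials, and bucket are actually reachable from inside the cluster. A rough debugging sketch using the Minio client image (pod name and alias are arbitrary; the endpoint, keys, and bucket match the ConfigMap/Secret above):

```shell
# Spin up a throwaway mc pod in the lemmy namespace
kubectl run -n lemmy mc-debug --rm -it --image=minio/mc --command -- /bin/sh

# Then, inside the pod:
mc alias set minio http://192.168.1.51:9000 SOMEUSERNAME SOMEKEY
mc ls minio/pict-rs   # should list the bucket contents without errors
```

If that fails, the problem is network policy, credentials, or a missing bucket rather than the pict-rs config itself.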
-1
Lemmy on K8S (github.com)

So here are the files I have cobbled together to deploy Lemmy on my own cluster at home. I know there are helm charts in the works, but this might help someone else who cannot wait, just like me :)

admin

joined 1 year ago