Perlis Tiger Cub Isn’t Real, Clue Hiding In Plain Sight

An entrepreneur’s Threads post about a tiger cub allegedly photographed from a bedroom window has reignited conversations on how AI-generated images can fool social media users.

A recent social media post by a local ice-cream entrepreneur has highlighted how easily artificial intelligence (AI)-generated images can mislead social media users.

The entrepreneur shared a photo of what appeared to be a tiger cub outside her house in Perlis.

She shared the post on Threads and Facebook, claiming that her son had snapped the picture from his bedroom window at around 4pm.

Screenshot of the Threads post.

When a user asked her if it was AI, she said she didn’t know and that her son had shared the photo “he took” with her.

The post quickly gained traction, with many users telling her the photo was clearly AI-generated.

Why were they so confident? In sharing the photo, the entrepreneur had overlooked a crucial detail: the Gemini logo in the bottom right corner of the image, indicating that it was generated using Google’s Gemini AI tool.

The entrepreneur has yet to respond to these comments, and the post remains up on both Threads and Facebook.

AI-generated visuals are becoming increasingly realistic, making it harder to differentiate them from genuine photographs.

Here are some ways to identify AI-generated images:

Check for watermarks or logos

AI-generated images often carry watermarks or logos such as “Gemini,” “DALL·E,” or “Midjourney,” usually placed subtly in a corner of the image.

Examine the details closely

AI still struggles with fine details. Odd-looking hands, inconsistent lighting, unnatural textures, or distorted features can be tell-tale signs.

Consider the plausibility

If an image depicts something unusual, such as a tiger cub roaming a residential compound, it’s worth questioning its authenticity and checking whether credible sources have reported the incident.

Read the caption carefully

Posts that rely on second-hand accounts or vague explanations can sometimes be used to lend credibility to fabricated content.

Use reverse image search tools

Reverse image searches can help determine whether an image has appeared online before, been reused from another context, or is linked to AI generation tools.

As AI-generated content becomes more widespread, users are urged to remain cautious and verify information before sharing, particularly when images could cause unnecessary alarm or spread misinformation.


© 2024 The Rakyat Post. All Rights Reserved. Owned by 3rd Wave Media Sdn Bhd