Content Guidelines

Topics Covered

  • AI tool reviews and comparisons
  • Practical AI implementation guides
  • Industry trends and insights from Berlin AI community
  • Real-world use cases and tutorials

Content Types

  • In-depth tool reviews with standardized testing
  • Comparison articles
  • Educational content for AI beginners
  • Community insights from AI Enthusiasts Berlin meetup

Guest Posts and Link Insertion Policy

To maintain editorial integrity and consistency, we do not accept guest posts. We also do not accept paid link insertions into existing blog posts or other content. All links within our content serve our readers' interests and editorial purposes only.

Quality & Credibility Standards

Research & Fact-Checking

Testing Standards:

  • Every tool personally tested by Lili Marocsik or expert contributors
  • Standardized prompts for consistent comparison
  • Real-world testing scenarios, not theoretical assessments
  • Default settings evaluation to avoid bias

Standard Test Prompts:

  • Video Generators: "Create a video of 2 people looking at each other and shaking hands, one being an AI robot, the other a woman. The background is space and the mood is friendly."
  • Image Generators: Three-prompt system testing baseline performance, creativity, and detail following
  • Presentation AI: Open-ended capability tests

For detailed information about our testing approach, please see our Testing Methodology page at https://www.aitoolssme.com/methodology-explained.

Accuracy & Accountability

  • Updates are published when tool features change
  • Source attribution for all factual claims
  • Strict review process for error correction

AI Content Integration

We use AI as a supporting tool to enhance our content creation process while maintaining human oversight, originality, and editorial integrity. All strategic decisions, insights, and opinions originate from human expertise.

AI Disclosure: We use AI to help generate text, but the insights are entirely our own.

Quality Control: All AI-assisted content is reviewed and fact-checked by human editors to ensure accuracy, relevance, and alignment with our editorial standards.

Testing Integrity: No AI simulation of results—only real, hands-on testing data.

What We Use AI For:

  • Text enhancement and formulation
  • Keyword optimization
  • Factual research (prices, features, support options)
  • Visual content creation for blogs and social media
  • Video content creation

What We DO NOT Use AI For:

  • Summarizing other websites' content or insights
  • Writing complete blog posts
  • Simulating test results

For complete details about our AI usage policies, tools, and transparency standards, please see our "How We Use AI" page.

Editorial Process

Writing Standards

Voice & Tone: Conversational, user-friendly, and non-technical, written from a marketing perspective in simple English that non-native speakers can easily follow.

Style: Engaging and human.

Language: Available in English.

SEO: Optimized for discoverability while maintaining readability.

Content Scheduling

Review Frequency:

  • Image generators: Monthly reviews
  • Video generators: Monthly reviews
  • Presentation AI: Monthly reviews
  • Other AI tools: Every 2 months
  • New relevant and important tools: Immediate review when available

We balance introducing new tools with maintaining coverage depth. For our full review schedule and criteria, visit our Testing Methodology page at https://www.aitoolssme.com/methodology-explained.

Editorial Decision-Making

Publishing Authority

Lili Marocsik, Chief Publisher, makes all final decisions on what gets published, which tools will be tested, which partnerships are entered, and which contributors can review tools for AI Tools SME.

Content Acceptance Criteria

Content must meet the following standards:

  • Truthfulness: All information must be accurate and verified through hands-on testing
  • Originality: Insights must be unique and self-developed
  • Balance: Reviews cannot be unreasonably positive or negative; every review must present both strengths and limitations

Editorial Team & Contributors

Team Structure

Chief Publisher: Lili Marocsik makes all editorial decisions regarding content, tool selection, partnerships, and contributor approval.

Expert Contributors: Contributors must have a proven track record in their specialized field, demonstrated through either extensive years of using similar tools or professional expertise in relevant areas. We typically approach contributors directly based on their documented expertise.

Quality Control: All content undergoes review by Lili Marocsik to ensure consistency with editorial standards and testing methodology.

Conflicts of Interest Management

Paid Review Policy

AI tool companies can pay us to review a tool that we would not otherwise have reviewed or added to our comparisons or AI tools lists. However, payment for a review does not guarantee a positive review.

We emphasize this policy on our "Submit Your Tool" page and during all client communication. We inform potential clients upfront that every good review consists of both positive and negative points. Our readers want to know about limitations as well as strengths, and AI Tools SME is first and foremost accountable to its readers.

Review Independence

All reviews—whether paid or organic—are independently conducted and hand-tested according to our standard methodology. Financial relationships do not influence our evaluation criteria, testing process, or editorial judgments.

Plagiarism and Originality

Original Content Commitment

All AI tools within our comparisons are hand-tested by either Lili Marocsik herself or contributors who are experts within niche fields. All insights about tools and ideas about AI tool categories or their future are our own.

We do not use AI to summarize other pages' content. As a result, we maintain a limited but thoroughly vetted selection of AI tools on our website, especially compared to our competitors. This approach ensures that every tool listing represents genuine, firsthand evaluation.

Post-Publication Policies

Content Updates and Revisions

As we constantly re-test tools—especially in the areas of image, video, and presentation generators—we refresh content regularly within comparisons, blog content, detailed reviews, and FAQs.

Within Comparisons:

  • We add our newest insights and include a test date indicating when we formed these insights
  • Old content is deleted and tools that are no longer relevant to that category are removed
  • AI tool entries typically remain in our database permanently

Blog Content and Detailed Reviews:

  • Content is deleted only when it is clearly outdated, meaning a newer review covering the most up-to-date features has been tested and the old content no longer applies
  • In such cases, a redirect from the old URL to the new review is created
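
As an illustrative sketch only (the actual mechanism depends on the hosting platform, and the URLs here are hypothetical), such a permanent redirect could look like this on an nginx server:

```nginx
# Permanently redirect an outdated review to its replacement.
# A 301 status tells search engines the move is permanent.
location = /blog/old-tool-review {
    return 301 /blog/new-tool-review;
}
```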

Error Correction

Significant errors are handled through a strict review process. If we notice we have made a mistake, we review all affected material, relying first on our own assessment and then considering feedback from clients and users.

If we determine that we have made an error, for example by listing an incorrect price structure, we correct it as soon as possible. We are committed to keeping our website up-to-date and truthful at all times.

Citation and Attribution Standards

Source Citation

When we cite a source in a blog post, comparison, or detailed review, we name the source and link to the original page.

Tool Website Linking

  • Links to tool websites, especially affiliate links, carry a rel="sponsored" attribute for full transparency
  • Within comparisons, tool links default to rel="nofollow" and rel="noopener"
  • Links in the AI tools list are standard followed ("dofollow") links, meaning they carry no "nofollow" attribute
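
As an illustrative sketch (the exact markup on the site may differ, and the tool URL here is a placeholder), the three link types above would look like this in HTML:

```html
<!-- Affiliate or paid tool link: disclosed as sponsored -->
<a href="https://example-tool.com/?ref=aitoolssme" rel="sponsored noopener" target="_blank">Example Tool</a>

<!-- Tool link inside a comparison: not followed by search engines -->
<a href="https://example-tool.com" rel="nofollow noopener" target="_blank">Example Tool</a>

<!-- Entry in the AI tools list: a standard followed link, no rel needed -->
<a href="https://example-tool.com">Example Tool</a>
```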

Screenshot and Image Usage

In almost all cases, the screenshots of a tool's dashboard are created by us during the review process. Only in rare cases do we use screenshots from the AI tool company itself, for example when we missed taking a screenshot and the platform's credits ran out.

We also use AI-generated images, especially for the image generator comparison, to showcase the quality of the generator. All images are generated by us and may be reused with clear attribution to aitoolssme.com by other users.

Review Prioritization and Timeline

Tool Selection Criteria

We prioritize tools we encounter frequently, whether on Google search results pages, at our AI meetups, or on social media. We are also more inclined to test tools when we learn about special or new features.

We refrain from reviewing tools that are:

  • Very immature (messy dashboard, complicated signup)
  • Malicious or misleading

Immediate Review Criteria

Immediate reviews are only conducted for highly anticipated AI tools from established players, particularly those with innovative character. For example, when Veo 3 was released as the first video generator with audio capabilities, it qualified as an immediate review due to its groundbreaking features.

For our complete review schedule by category, please visit our Testing Methodology page at https://www.aitoolssme.com/methodology-explained.

Data and Evidence Standards

Testing Documentation

We document our testing results in our Airtable database, where we also note when the last test took place.

Data Management

We currently do not archive test data but rather overwrite it. Because AI tools constantly evolve and improve, we prioritize showing current developments rather than holding on to outdated information.

Reproducibility Standards

We always use the same review dashboards, which require us to enter consistent input data such as:

  • Ratings from 1-5 for tool performance
  • Setup/onboarding experience
  • User experience

Additional factors depend on the AI tool category and relevance, such as AI voice quality or design capabilities.