Friday, July 12, 2024

AI content use in tea marketing; message discussion with AI

 

I haven't avoided this topic, but I don't have much to say to about it, so I've never mentioned it here.  People are clear on why AI text content and pictures are interesting and positive, and also what the down-sides are, that kids can use it to write their paper instead of them doing so, while learning to write, and that it displaces artists, or people creating marketing content, and so on.


A recent post in a main Facebook group, Gong Fu Cha, drew criticism for being AI generated.  It was this:


๐˜๐ข๐ฑ๐ข๐ง๐  ๐๐ฎ๐ซ๐ฉ๐ฅ๐ž ๐’๐š๐ง๐ ๐“๐ž๐š๐ฉ๐จ๐ญ
ๅฎœๅ…ด็ดซ็ ‚ๅฃถ
History
The Yixing purple sand teapot, also known as Zisha teapot, has a rich history that dates back to the Song Dynasty (960-1279 AD) in China. The clay used for these teapots comes from the region around Yixing in Jiangsu province. This area is renowned for its unique type of clay called "zisha" or "purple sand," which can also be red or green.
During the Ming Dynasty (1368-1644), Yixing teapots gained significant popularity, and many skilled artisans emerged, elevating the craftsmanship to an art form. By the Qing Dynasty (1644-1912), Yixing teapots were widely regarded as essential items for tea connoisseurs.
Reasons to Use a Yixing Teapot
1. Porosity: The clay used in Yixing teapots is slightly porous, which allows the teapot to "breathe." This enhances the flavor and aroma of the tea over time, as the pot absorbs the essence of the tea brewed in it.
2. Heat Retention: Yixing clay has excellent heat retention properties, which helps maintain a consistent temperature during brewing, enhancing the extraction of flavors.
3. Patina: Over time, a patina develops on the inside of the teapot, further enhancing the taste of the tea. This makes each pot unique to its owner and the teas brewed within it.
4. Aesthetic Appeal: Yixing teapots are highly valued for their artistic and aesthetic qualities. Each pot is often handcrafted and can be an exquisite piece of art, reflecting the skill and creativity of the artisan.

The Yixing purple sand teapot is not just a vessel for brewing tea but a symbol of Chinese tea culture, artistry, and tradition. Its unique properties and the skill required to create them make Yixing teapots highly prized among tea enthusiasts and collectors worldwide.





It seems fine to me.  Or is it?  Is there a reason why a program writing this kind of content instead of a human discussing this background is problematic?  I'll get to all that.  Is the picture itself also AI generated?  Probably, but I can't tell for sure.  This one seems to be, from the next day:





Usually I have the next half a dozen points in mind, almost immediately mapped out, based on how I've arranged related ideas in the past.  Not this time.  It doesn't help that I don't care about clay teaware at all.  It's helpful for me to narrow tea related interest, to keep the scope manageable, and that has been a good dividing line.  

I own three clay pots for brewing tea, two of which I bought in Taiwan, and one of which my wife gave me from her father owning it, who hasn't been alive for the last 35 years.  I've not even fully seasoned the two that I bought about 6 years ago; I use gaiwans.  I know very little about clay teapots, and it works out better not to learn more.  Then someday I'll probably get to it.


Then the next picture relates to tea itself.  Someone mentioned in a comment that the tea cake looks like pu'er pudding, not the actual compressed tea.  Tea cakes don't really tend to look exactly like that; the material form and texture is ok but the shape isn't.  The text content is probably AI material too, for both.  So again, what is the problem?  Next bots could be coordinating this AI content generation and posting it themselves; how would that be a further problem?


It ties to another concern I've had recently about clearly fake group request profiles, that I tend to delete from the larger Facebook group I admin for (International Tea Talk).  Why has there been a wave of new, clearly fake profile requests?  Why were they not adjusted slightly to not be so obvious about not being actual people?  Probably within months those changes will occur; they'll update their process.  

They are easy to spot due to being created in the past two weeks or so, all with a single picture (which you have to click through to recognize, that the profile contains one picture), all based on background details that don't add up (from Ukraine, but living elsewhere, educated at Harvard, etc.).  Any one of those details is reasonable, but the sets don't seem to make sense together.

It all seems to add up to a problematic shift in how we are going to be experiencing social media, very soon.  Fake profiles will be much more common than they are now, and harder to spot, and they'll be able to create and post content that won't be as obvious as those two posts were.  

Someone posting one of those two mentioned that they prefer to use AI content to not steal random photos online from others, which works in one sense.  But that would be all the less easy to spot if content was presented as created by a person instead, as long as they managed where they farmed the content carefully.  Or not carefully at all, and the programs could just track which sources content had been taken from that were flagged and deleted later.  Of course if they steal from a main vendor, or use high-profile media content, or FB group post content, that's going to get noticed, but they wouldn't even need to use human intuition to filter that, they could just go by trial and error. 

The initial obvious problem isn't the main problem, that gradually AI will replace human specialist knowledge, and even interaction.  In the past low effort drop-shipping sort of vending startups were easy to spot, and they came and went fast.  It's not so much the form that is a problem, it's that you almost certainly would be buying tea from someone who knows very little about the subject, and is probably more concerned with streamlining and automated marketing (including content creation), ordering, and fulfilment process than with what they are selling.

Later on websites, sales portals, online background content, and interaction will all be much easier to generate relatively automatically.  We are already seeing more sophisticated vending materials that are obviously AI generated.  Later on it won't be as obvious.

It's the next step after that seems to be a concern.  The online interaction in relation to this automatically generated content will be supported by more sophisticated bots.  It's easy to spot now when online marketing posts are bolstered by positive comments from fake profiles.  The dots seem easy enough to connect, but they won't always be.  Later on an ad could lead someone to what seems to be developed content related to in-depth specialist exploration of tea, with online materials documenting a supportive and interactive customer base that reinforces how good that source really is.  And it all could be fake. 

Lets consider a rather unusual example case that relates to just how far this kind of thing could go.


Making friends with AI in social media


I think I recently discussed a range of different themes with a pilot program AI profile, earlier this year.  That's still only a high probability, because in some ways that "person" seemed real.  I suspect that's because input was being mixed from what was created by a human and then re-organized and managed by a program.

Maybe the context doesn't matter, but I can include that.  It was on Quora, a question and answer site that I had been more active in over the past, and still frequent.  I've created two Quora Spaces there, one about tea (Specialty Tea) and general cultural themes related to different national cultures (About Foreign Cultures), and have answered lots and lots of questions.  It seems as well to not flag the profile or add details that effectively do so, but the "shared interest" related to one of a dozen or so odd running themes I've written about here over the last year or two, but not tea.

I think I commented on something posted, then that "person" responded, then reached out about it through message, and we talked from there.  How would it not be obvious if it was really a person or not?  I think a lot of the starting point content was human generated, that initial discussion was created around what one would normally say about themselves, or respond to standard questions or starting points.  


Then there was a bit of limited continuity related to discussion flow; they didn't talk by messages as a person would, closely linking discussion themes.  This was explained away in terms of a contextual theme.  One of the first AI discussion programs, not so long ago, masked that the "person" was really a program by making them a teenager, which would explain away limited communication ability and inconsistency.  It was something like that, established as a base context early on, just not that.

It seemed like the mix of blending human created input with use and message functions by a program really worked.  It would be possible to have a person write out a number of responses or longer passages that tell a really compelling personal story, just a fictional one.  Written in the right way a number of content passages could have great depth, and really link together, then with most discussion content not holding up to that level.

It wouldn't even need to be fictional; there is no reason why I couldn't convert my own perspective, background, and life experiences into a relatable, convincing AI language model program, designed to discuss things with others.  It would help if I built in a relatively severe personal limitation, to mask inconsistencies and errors that would occur.  

It's not the one that probable-program I talked to used, but "coming clean" about experiencing mental health issues in the form of depression, anxiety, or manic episodes would work.  Then lapses in answering or discussing things consistently would be understandable.  The empathy of the person "it" talks to would kick in, and the person in discussion with it would be less likely to call out the program on not making complete sense, or forgetting things.

You could even have the program blame people when they repeated themes, for being repetitive, or forgetting points already made.  A little would go a long way with that; the more unpleasant a discussion experience would be the less likely it might be to continue.  This was actually an approach used in that discussion, cross-referencing repetition.


In my own case it would work to develop themes and discussion points related to my own interests, about running, or tea, and so on.  Getting a program to interact in the form of a normal conversation would be very difficult.  Chat GPT can mirror content creation very well now, but the flow of conversation is something else.  Any program model would be quite rough to begin with.  

How would you keep making adjustments to that?  You would create a profile on somewhere like Quora and practice.  Then you would watch out for when the function didn't work, I suppose especially related to when someone finally called you out on being a program.  After some discussion that "person" raised the issue of others accusing it / them of sounding like an AI, in kind of a cool twist, seemingly fishing for feedback.  

Or maybe it was a person, right?  Maybe the personal limitation they described makes them sound exactly like an AI, with the same kind of odd speech patterns, inconsistency, and range and discussion continuity limitations.

All of this is a lot more sophisticated than a bot posting about liking a tea would need to be, to create fake and misleading marketing content, identified as discussion instead of first-person product marketing.  But people are watching for that sort of thing already (and to a lesser extent moderation programs are); that's the problem flagged by these posts, in most of the comments responding.  Those people used AI generated images and some AI generated text content, essentially presented as what it was, and others immediately questioned if this was allowed under group rules.  

It's not prohibited in that group yet, as far as I know, because it just started appearing.  Then it's a bit scary that people are already populating these groups with hundreds if not thousands of fake members, profiles set up for some yet unknown purpose.  In the past it would've related to farming "likes," and it's probably partly that, but who knows what else is intended, or will become possible in the next year or two.


Another example; a scammer profile shows how far and fast this can develop


Someone just posted in a group related to Hawaii, where I live part of the time, asking a question, about surfing.  It looked odd, how they worded it, but their name, profile picture, and the question all looked pretty legit, until you looked deeper.  Then it was all clearly fake.  People commented that they were a scammer, but even before that I glanced at the profile to see why their comments seemed odd, worded unnaturally (which can just relate to someone using English as a second language).  I would assume the people flagging the account as a scammer related to getting direct messages, but that wasn't completely clear.

The pictures were of a Ukrainian model, with a very American name (along the line of Rose Smith, just not that).  One tell was the "person" being from California City, California.  That's a real place, with a population of 15,000, but foreigner scammers tend to mix things that make sense with what doesn't, like assuming that a city named after a state would be a conventional place to be from, without checking that it's really a very small town.  

Then even at a glance other parts of their profile didn't add up; an earlier feed post was about working as a runway model, and another related to being from Ukraine, both of which were not listed on the background.  The person was listed as a clothing designer at The Gap, linking to a Gap profile that had a few hundred likes, with one picture posted, so not "that" Gap.  A related website link was broken, pulling up a Chinese language notice that the site wasn't actively supported.

The rest is more about where I'm going with this; the profile was created in 2009, with those pictures uploaded in 2019 and 20.

Everyone already knows that there are a lot of fake accounts on all social media platforms, there to be used for farming likes, purchasing followers or friends, or for whatever other purposes.  For one to be that old it seems likely that the people using it, probably scammers, hacked into either an old and inactive account or took over an active one, which someone then gave up on using.  Posting stolen content years in advance related to laying a foundation for using the accounts.  They probably have hundreds set aside for such use, so they can initiate all sorts of related discussions and schemes, and then lose nothing when Facebook moderation shuts them down.

So far that's all pretty standard stuff.  When we see marketing content draw enthusiastic feedback from lots of profiles it looks like that's what this is, when people comment on how great a product looks, or they post linked mentions to notify their "friends" about it.

Once AI gets a little more sophisticated this will open up brand new forms of "sock puppeting," using secondary, alternate profiles to support points made in posts.  It seems so dodgy that it's hard to imagine tea vendors using this approach.  If someone is selling some odd drop-shipped product maybe they would go this route, paying for bot profiles to post lots discussion feedback and testimonials about how an off-brand shoe design or pair of pants is so unique and groundbreaking.  But for tea?  We aren't there yet.

I have tried teas from sources that seem to make a start towards that though, from people who know nothing about tea, using cut-and-pasted website marketing content, selling very moderate quality product versions, positioned somewhat randomly in relation to what the tea really is.  Once all that gets pushed a couple of steps forward the sites will be much cleaner, the text and image content much better, and packaging and vending forms more developed.  From day one it could include ample testimonial input, all made up, with social media posts populated with feedback from many satisfied customers.  

Bots could become much more adept at working backwards to navigate group rules; it wouldn't take much development for them to be better at that than most vendors now are.  You would think that a Darjeeling producer could write about issues or background related to Darjeeling, to post that as discussion, only implying that they can also sell related teas, but it's not like that now.  It's starting to be, but AI based bots could become better at that game than people in no time, maybe by the end of this year.


Luckily people aren't completely redundant, yet; we still need to function as consumers.  The bots can't develop their own financial resources and consumption demand requirements, or take personal satisfaction in online exchanges.  Once they do people will finally become obsolete.


No comments:

Post a Comment