♟ mxn

The gadget configures Moment.js to match the interface language. If the original reporter was using MediaWiki in the en locale, that would explain the American-style 12-hour clock. They can switch to en-gb, for instance, to get a 24-hour clock.
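
For illustration, a minimal sketch of what that locale configuration could look like in a gadget (an assumption for illustration, not the gadget's actual code):

```javascript
// Match Moment.js to the MediaWiki interface language.
moment.locale( mw.config.get( 'wgUserLanguage' ) );
moment().format( 'LT' ); // e.g. "3:45 PM" under en, "15:45" under en-gb
```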

I’m looking at migrating my Comments in Local Time fork from Moment.js to Intl.RelativeTimeFormat and mediawiki.DateFormatter. Unfortunately, the user script currently relies on some behavior that the new libraries don’t cover: in Moment.js, calendar time can say “Last Monday”, whereas Intl.RelativeTimeFormat would say “5 days ago”. I’ll probably settle for just repeating the day of the week by itself, but this likely won’t be as understandable in some languages other than English. Just noting this in case other gadget or user script developers run into the same problem.
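
As a concrete illustration of the gap, here is a minimal sketch (assuming the en locale and a timestamp five days in the past) comparing what the Intl APIs can produce:

```javascript
// Moment.js calendar time could render this date as "Last Monday at 2:30 PM".
var fiveDaysAgo = new Date( Date.now() - 5 * 24 * 60 * 60 * 1000 );

// Intl.RelativeTimeFormat can only count units:
new Intl.RelativeTimeFormat( 'en', { numeric: 'auto' } ).format( -5, 'day' );
// → "5 days ago"

// The closest replacement is the bare weekday name, with no "Last …" qualifier:
new Intl.DateTimeFormat( 'en', { weekday: 'long' } ).format( fiveDaysAgo );
// → e.g. "Monday"
```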

Yes, this is my version of CommentsInLocalTime, which is configurable. The gadget currently defaults to whatever Moment.js considers to be a good default for the current interface language, but I’m looking into migrating it to Intl.DateTimeFormat and mediawiki.DateFormatter (T392532).

I see; then it may also have been intentional to keep type=route for the Pacific Crest Trail example. My impression is that type=route is mostly outdated for relations of relations, as in my region a different relation type is almost always used for these relations these days. Your "stems from a bug" assessment sounds odd, though: the superroute relation type existed and was used long before this change to the JOSM codebase was proposed.

This leaves it a bit unclear what exactly should be supported here. A relation of relations of linear features is often a relation of either type superroute or route_master (see also T312572). It may be desirable to treat these two the same way in our import mapping. Fetching member relation geometries for other types of relations is probably less desirable.

In this context, the Pacific Crest Trail relation appears to be a somewhat odd example in its current state. It wasn't a relation of relations until a few years ago; then it was turned into one, but its relation type route wasn't changed. I assume this isn't intentional and that, like the Kungsleden example, it should also get type superroute?

Agreed, this is an ideal outcome with respect to wikis that have Vietnamese as the content language. Thank you! Case folding is probably unavoidably more aggressive at a non-Vietnamese Wiktionary, but Wiktionaries tend to include hatnotes or other navigation aids to homoglyphic titles anyways.

Better yet, the link could point directly to OHM’s coverage of a particular date by formatting the link to include a &date= parameter. But I think this would be a little more complex than just changing a configuration file, since the extension would need to recognize an additional parameter in the extension hook too. Regardless, the reader will be able to manually navigate to the correct date using the interactive time slider on OHM’s homepage.
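
A hedged sketch of that link formatting; only the &date= parameter name comes from this discussion, and the rest of the URL shape is an assumption for illustration:

```javascript
// Hypothetical helper; OHM's actual URL structure may differ.
function ohmLinkForDate( lat, lon, zoom, isoDate ) {
	return 'https://www.openhistoricalmap.org/?date=' + encodeURIComponent( isoDate ) +
		'#map=' + zoom + '/' + lat + '/' + lon;
}
ohmLinkForDate( 10.776, 106.7, 14, '1900-01-01' );
```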

mxn updated the name of F58943662: External links.png from "ảnh.png" to "External links.png".

GPX is mainly used for GPS tracking

Isn't it useful for someone wanting to, say, put a list of statues in a town into their GPS device, in order to visit them all?

My understanding is that we would have to implement what the libraries are doing ourselves, which doesn't sound too great either because of the maintenance associated with it. What I cannot say is how much of the code in the libraries is unneeded for our particular case and whether there are ways to slim it down. I think this requires a developer with a bit more understanding of the formats to look into it, and a sense of which format to focus on.

By the way, I posted a couple workarounds on the OSM forum where this task was crossposted. The tools I mentioned can output GeoJSON; if you need KML or GPX, there are a number of command line conversion utilities, or you can use a Web frontend like geojson.io or mapshaper.

When I get to this URL: https://www-newspapers-com.wikipedialibrary.idm.oclc.org/signin/?next_url=/, I get this error message: "There was an error while loading the form. Please contact customer support." I tried changing the URL to www.newspapers.com instead of the hyphenated proxy hostname, but it automatically reverts. Is there a fix for this?

I can't create clippings. It tells me to sign in, and when I do, I see "There was an error while loading the form. Please contact customer support" in a pink box.

Searches are working for me. I’m still unable to log in with my personal account through the proxy due to the CloudFlare error I mentioned earlier. Fortunately, I seem to have a workaround in Firefox:

I tested the Vietnamese Wikipedia in Firefox 128.0b5 on macOS 14.5 Sonoma. ULS lacks a Vietnamese input method (T65465), but the Vietnamese wikis have a default gadget that provides this functionality using a similar mechanism. It seems to work fine, except that it temporarily stops working after typing [[, probably because of some automatic syntax highlighting that takes place. An ULS-based input method might have similar behavior.

Regarding https://phabricator.wikimedia.org/T171374#10053186, I tested the Vietnamese Wikipedia in Firefox 128.0b5 on macOS 14.5 Sonoma. This is notable since ULS lacks a Vietnamese input method: T65465. The system’s Vietnamese input methods work fine, as does a Vietnamese keyboard layout that has many dead keys without modifiers, just like the Spanish keyboard layout mentioned above. I recall that both keyboard layouts and input methods used to exhibit this issue in older versions of CodeMirror, though it currently all seems to work regardless of whether the cm6enable parameter is set to 1 or 0, so maybe I’m missing something here.

Would you also like feedback about other IMEs besides ULS, given the related ticket T232920?

Just now, I returned to www-newspapers-com.wikipedialibrary.idm.oclc.org and found that the “Welcome from Wikimedia Foundation” banner had returned and I can once again search the archives. However, the CloudFlare error on the login page still prevents me from logging in and creating clippings. So we aren’t out of the woods yet, but if others are seeing the same improvement, then I suppose TWL can reenable the Access Collection button.

To be clear, I’m not saying I have a solution or even a workaround. I’m explaining what I did to narrow down the issue to that CloudFlare error, in the hope that someone can nudge Newspapers.com to adjust their CloudFlare settings.

I meant that I went directly to www-newspapers-com.wikipedialibrary.idm.oclc.org and clicked the Login button, which I would normally do to connect the Wikipedia Library access to my personal account. But this no longer works because of the CloudFlare error I described.

I use Newspapers.com through the proxy but also log into a subscriptionless personal account simultaneously in order to clip articles. This has mostly worked except for some hiccups back in June, which I resolved by clearing my cookies for both newspapers.com and www-newspapers-com.wikipedialibrary.idm.oclc.org. Unfortunately, today I got logged out and clearing cookies did not help. As others noted, when I tried to log in directly through newspapers.com, I saw a CloudFlare challenge as part of the login form. The proxied login form also attempts to display this challenge, but the XMLHttpRequest to https://challenges.cloudflare.com/cdn-cgi/challenge-platform/h/b/flow/ov1/… fails with an HTTP 400 error. In the console, I see the following error and warning:

As noted in that discussion, QLever can export GeoJSON, with the caveat that its replag is measured in days or weeks and the GeoJSON it outputs is slightly oddly formatted.

How about special-casing Serbian to use sr-Latn?

It’s working fine for me, just took a little while. Thanks for your help!

Revision 8 of the 2014 edition of the California Manual on Uniform Traffic Control Devices, the legal standard for all traffic control devices in the U.S. state of California.

There has been recent movement on T172035, but I don’t think this ticket is relevant anymore. The portals long ago migrated from me manually copy-pasting HTML around to Meta sysops generating the HTML using a Lua module to finally automating the whole process without Meta’s involvement (T128546).

The sparse FAQ about rel="me" made it sound like it’s purely about whether the two pages represent the same entity, but this documentation does emphasize the need to make identity consolidation an opt-in, which the feature requested here definitely would not be. I’d be OK with closing this issue unless there’s some way to make the linking more nuanced.

Delete *all* translations and start over, but then there’s a lot of extra work that didn’t need to happen

This issue still affects any language that numbers its months instead of naming them. Chinese, Japanese, Korean, and Portuguese have been mentioned above, to which I’d add Vietnamese. These languages have narrow month “names” that are just bare numbers and instead rely on date formats to append a prefix or suffix to the month name.
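
For example, a minimal sketch of the Vietnamese case using Intl.DateTimeFormat (the exact strings depend on the CLDR data shipped with the browser):

```javascript
var date = new Date( 2024, 0, 15 );

// The standalone month "name" is just the bare number…
new Intl.DateTimeFormat( 'vi', { month: 'narrow' } ).format( date ); // e.g. "1"

// …and the date formats are what supply the "tháng" prefix around it.
new Intl.DateTimeFormat( 'vi', { month: 'long' } ).format( date ); // e.g. "tháng 1"
new Intl.DateTimeFormat( 'vi', { day: 'numeric', month: 'long', year: 'numeric' } )
	.format( date ); // e.g. "15 tháng 1, 2024"
```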

The approach I had to take with Vietnamese (separate lexemes per word per writing system, “translations” from one writing system to another) does have some downsides. For one thing, the criteria for a translation between vi and vi-Hani must be stricter than the criteria for a translation between vi and en; otherwise there would be no way to distinguish these transcriptions from translations more generally. In principle, it would follow that every simplified Chinese character should also have a separate lexeme from the corresponding traditional character(s), as on Wiktionary, and we could even take this to the extreme that “colour” is the en-GB “translation” of “color” in en-US. Maybe this wouldn’t be such a slippery slope with a dedicated “transcriptions” or “readings” property alongside the “translations” property, but such a property would be less discoverable by data consumers.

Wikidata’s Vietnamese-language lexemes are currently using vi-x-Q8201 as the language code for chữ Nôm, as a workaround for this issue:

phở: 𬖾, 頗
râu: 鬍, 𩅺, 𩭶, 𩯁

If it is so important that forms not be used for orthographic variants of a non-alphabetic writing system, then the alternative approach would be to store the quốc ngữ and chữ Nôm representations in separate lexemes, as though they’re different languages. We could link individual quốc ngữ and chữ Nôm senses together as translations. This would be broadly consistent with the approach taken on every Wiktionary and render this ticket moot for Vietnamese, but it bends the definition of a language quite a bit.

@mxn If these are purely orthographic variants (i.e. the pronunciation is the same) I would list them under a single lexeme. And in that case, the most natural way would be to list them as spelling variants rather than distinct forms.

The ideal solution would be to allow (in the language code validator) arbitrary language codes including a rank identifier. For instance, for Vietnamese one should be able to use codes such as vi-x-Q8201-1, vi-x-Q8201-2, etc. Currently this doesn't pass validation, as one gets the error Invalid Item ID "Q8201-1".

It sounds like representations need the ability to have qualifiers…

Nearly every Vietnamese lexeme would be affected by this issue, because one of the two writing systems for the language is phonetic while the other is phonosemantic, resulting in a many-to-many relationship between the two writing systems.

So what is the font to be used?

Please see this list of fonts: https://en.m.wikipedia.org/wiki/Template:Vi-nom/fonts.css

In other words, "vi-Hani" should refer to Hán tự/Hanzi. A person reading Classical Chinese should be able to discern its meaning and not really notice the source is from Vietnam.

I agree that zh-classical shouldn’t be included in the conversion feature, since no browser or operating system would come with a Classical Chinese localization anyways.

The OpenStreetMap Wiki possibly saw this issue as well at one point, but then it went away. It was only affecting the mr, pam, and tl localizations there.

This change should be communicated on the Wiktionary village pump.

Note from Wikidata Data Re-use Days: @Mike_Peel says some Items are huge (e.g. 4.3 MB for Q87483673). This is problematic! @Mahir256 noted that this task might be a solution.

There are only search options for wikis with 100,000+ articles, of which Santali is not one yet. This threshold is somewhat arbitrary, and I think it can be changed, maybe to include wikis with 10,000+ articles instead.

Belatedly unassigning myself, since I got distracted away from this task a long, long time ago. But the good news is that @Bharatkhatri351 is actively working on modernizing this portal as part of T286437 and related tasks. (Feel free to claim this task if you think everything it covers is within the scope of your project.)

mxn added a watcher for Wikimedia-Portals: mxn.

Future deployments based on the table above should note that there’s a potential for naming conflicts that is already causing some wikis to be branded with the wrong site’s wordmark: T296501.

This affects the logo at the top of every page at the Vietnamese Wikibooks, either when using the mobile interface or when using the new Vector skin. It’s very confusing for readers to see the site branded with another site’s wordmark.

mxn renamed T296501: Fix wordmark for viwikibooks, strategy from "Fix wordmark for Outreach-wiki" to "Fix wordmark for Outreach-wiki, viwikibooks, strategy".

This is also affecting the Vietnamese Wikibooks. Contrary to the table in T290091, rOMWCbf82bfb3ddcaff04a1e90abc435ccb26f792780c uploaded only a single en-wordmark.svg and used it for the French Wikiquote, Vietnamese Wikibooks, Outreach, and Strategy wikis.

This template implements a workaround using the CSS border-radius property, though built-in support would be much more semantically correct.

Disadvantages:

As noted in T286863#7287345, @MikePlantilla has begun work on a jquery.ime-based reimplementation of Vietnamese IME.

Thanks @MikePlantilla, that looks promising indeed! I look forward to taking the implementation for a spin. Let’s continue the conversation over in T65465.

This change fixes the errors reported above by synthesizing an input method, based on this change in an old version of the AVIM Firefox extension. I tested it in Firefox 56, Firefox 91, Chrome 94, and Safari 13.1 while logged in. (I don’t know of a way to test the new Vector improvements while logged out.)

Looking at https://vi.wikipedia.org/wiki/Đặc_biệt:GadgetUsage , this seems to be a default gadget (so we have no idea how many people use it?).
https://vi.wikipedia.org/wiki/MediaWiki:Gadget-AVIM.js and https://vi.wikipedia.org/wiki/MediaWiki:Gadget-AVIM_portlet.js imply it is some 13-year-old "Vietnamese Input Method". It would help a lot to know/understand whether modern operating systems have solved some/most/all of what this gadget offers, so that we could potentially disable it.

That’s fair. The main thing we miss out on with the existing DPL extension is the ability to apply styling per language, since each link title could be in a different language with different font needs. But we’ve been making do for years, so it isn’t a huge issue for us.

I think it would be useful to see some examples of how these would be used/what they would be used for.

I'm not sure that ko-kore (or ko-hani) would be the best way to add text containing hanja, because wouldn't we want it to be linked to the corresponding hangul? Years ago, I proposed a "hanja" property for Korean (proposal here). It was rejected at the time, but I still think that would be the best way to do it, and perhaps it would be a good idea to revisit it.

I imagine it's a similar situation for vi-hani.

Adding chữ Nôm would be a nice improvement for Vietnamese-language Wikimedia projects as well as for federated projects. For example, an infobox at the Vietnamese Wikipedia could display the historical chữ Nôm name of a place or person. OpenStreetMap currently has a nonnegligible number of chữ Nôm names, even though the project is supposed to focus on current details rather than historical details. The people who contributed these names would probably be less inclined to add them to OSM if they could add them to Wikidata instead, where historical nuances can be better accounted for (like the distinction between Ho Chi Minh City and Saigon).

This would also be useful for adding a basic visualization of the data for some kinds of tabular data. For example, many of the Data talk: pages in https://commons.wikimedia.org/wiki/Category:Tabular_data_of_COVID-19_cases contain time series graphs of the case tables.

As an example of how this feature would be useful on Wikimedia wikis, https://commons.wikimedia.org/wiki/Category:Tabular_data_of_COVID-19_cases is populated entirely by pages in the Data talk: namespace because it isn’t possible to add the data tables directly to the category.

Wikimedia Maps appears to be pretty much caught up now, based on some spot checks of recent edits I’ve made to OpenStreetMap. For example, https://www.openstreetmap.org/changeset/93204365 added a covered bridge and creek that show up in https://maps.wikimedia.org/#19/39.71640/-81.61655.

For what it’s worth, the relation I reported in https://phabricator.wikimedia.org/T218097#6007413 has begun showing up in mapframes, so something has begun moving, though I don’t know if Wikimedia Maps is fully caught up.

We could significantly reduce the number of API calls by not having the script query overpass for certain classes of item (instances of human, taxon, academic paper, astronomical object, language, genome, chemical compound, etc.)

If the map display could also not appear for those classes, so much the better.

I think that would require a WDS query on every page load, which would spare one API by hammering another.

Would it? I'm no coder, but I would have thought the script can read the content of the page on which it occurs.
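
For illustration, here is a hedged sketch of the "read the page" approach: a user script on an item page can get the entity's instance of (P31) values from the entity data that Wikibase already loads, with no extra query. The class list and the exact hook usage here are assumptions, not any script's actual code:

```javascript
// Item classes for which the Overpass query and map would be skipped (illustrative).
var SKIP_CLASSES = [ 'Q5' /* human */, 'Q16521' /* taxon */, 'Q13442814' /* scholarly article */ ];

mw.hook( 'wikibase.entityPage.entityLoaded' ).add( function ( entity ) {
	var statements = ( entity.claims && entity.claims.P31 ) || [];
	var isSkipped = statements.some( function ( statement ) {
		var value = statement.mainsnak.datavalue && statement.mainsnak.datavalue.value;
		return value && SKIP_CLASSES.indexOf( value.id ) !== -1;
	} );
	if ( !isSkipped ) {
		// …query Overpass and show the map as the script does today…
	}
} );
```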

I think that would require a WDS query on every page load, which would spare one API by hammering another. 😉

While not perfect, the Overpass user script does enable users to see which OpenStreetMap features are linked to the Wikidata item. (This is distinct from P625, which is about coordinates rather than OSM features.) The original idea at the hackathon was to have this functionality incorporated into Wikidata as a gadget, so I’ll leave this issue open.

One strategy I often see in table editors (both on the Web and on native platforms) is to maintain only a single input field and move it around to cover the selected cell. (On Apple platforms, this is called the “field editor”.) Unselected cells are otherwise just plain text. That keeps things lightweight even for large tables and makes it more straightforward to swap in other kinds of input fields depending on the data type.
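
A minimal sketch of that pattern in plain DOM terms; the table selector and the write-back behavior are assumptions for illustration:

```javascript
// One shared input ("field editor") repositioned over whichever cell is being edited.
var fieldEditor = document.createElement( 'input' );
fieldEditor.hidden = true;
document.body.appendChild( fieldEditor );

document.querySelector( 'table' ).addEventListener( 'click', function ( event ) {
	var cell = event.target.closest( 'td' );
	if ( !cell ) {
		return;
	}
	var rect = cell.getBoundingClientRect();
	fieldEditor.style.position = 'absolute';
	fieldEditor.style.left = ( rect.left + window.scrollX ) + 'px';
	fieldEditor.style.top = ( rect.top + window.scrollY ) + 'px';
	fieldEditor.style.width = rect.width + 'px';
	fieldEditor.value = cell.textContent;
	fieldEditor.hidden = false;
	fieldEditor.focus();
	fieldEditor.onblur = function () {
		cell.textContent = fieldEditor.value; // write the edit back into the plain-text cell
		fieldEditor.hidden = true;
	};
} );
```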

T63989 implemented an editing interface for PageForms using jsGrid, which seems suitable for editing the data inside tabular data inline, if not the structure too.

Should we try to unify the functionality in https://en.wikipedia.org/wiki/Template:Json2table and https://en.wikipedia.org/wiki/Module:Tabular_data? Or maybe it’s better to have one focus on “raw” output and the other focus on polished formatting?

@eprodromou had a similar idea with https://en.wikipedia.org/wiki/Module:Covid19Data, which is specific to COVID-19 case count tables. (https://en.wikipedia.org/wiki/Template:Medical_cases_chart can also display such a table as a bar chart.)

This is now one of the functions in https://en.wikipedia.org/wiki/Module:Tabular_data. For this function to be useful to articles that currently hardcode tabular data, there will probably need to be styling and data formatting options, and the sources should probably go in <ref>.

mw.ext.data.get() returns the tabular data for a page as a Lua table. A starting point would be to output a wikitable that formats the tabular data exactly as on Commons.

